
Navigating the US-China 5G competition

Executive summary: The United States and China are in a race to deploy fifth-generation, or 5G, wireless networks, and the country that dominates will lead in standard-setting, patents, and the global supply chain. While some analysts suggest that the Chinese government appears to be on a sprint to achieve nationwide 5G, U.S. government leaders and…






Why France? Understanding terrorism’s many (and complicated) causes

The terrible attack in Nice on July 14—Bastille Day—saddened us all. For a country that has done so much historically to promote democracy and human rights at home and abroad, France is paying a terrible and unfair price, even more than most countries. This attack will again raise the question: Why France?






The Marketplace of Democracy: Electoral Competition and American Politics


Brookings Institution Press and Cato Institute, 2006, 312 pp.

Since 1998, U.S. House incumbents have won a staggering 98 percent of their reelection races. Electoral competition is also low and in decline in most state and primary elections. The Marketplace of Democracy combines the resources of two eminent research organizations—the Brookings Institution and the Cato Institute—to address the startling lack of competition in our democratic system. The contributors consider the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable yet for many is becoming stagnant, self-perpetuating, and tone-deaf. How did we get to this point, and what—if anything—should be done about it?

In The Marketplace of Democracy, top-tier political scholars also investigate the perceived lack of competition in arenas only previously speculated on, such as state legislative contests and congressional primaries. Michael McDonald, John Samples, and their colleagues analyze previous reform efforts such as direct primaries and term limits, and the effects they have had on electoral competition. They also examine current reform efforts in redistricting and campaign finance regulation, as well as the impact of third parties. In sum, what does all this tell us about what might be done to increase electoral competition?

Elections are the vehicles through which Americans choose who governs them, and the power of the ballot enables ordinary citizens to keep public officials accountable. This volume considers different policy options for increasing the competition needed to keep American politics vibrant, responsive, and democratic.


Brookings Forum: "The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics," October 27, 2006.

Podcast: "The Marketplace of Democracy: Electoral Competition and American Politics," a Capitol Hill briefing featuring Michael McDonald and John Samples, September 22, 2006.


Contributors: Stephen Ansolabehere (Massachusetts Institute of Technology), William D. Berry (Florida State University), Bruce Cain (University of California-Berkeley), Thomas M. Carsey (Florida State University), James G. Gimpel (University of Maryland), Tim Groseclose (University of California-Los Angeles), John Hanley (University of California-Berkeley), John Mark Hansen (University of Chicago), Paul S. Herrnson (University of Maryland), Shigeo Hirano (Columbia University), Gary C. Jacobson (University of California-San Diego), Thad Kousser (University of California-San Diego), Frances E. Lee (University of Maryland), John C. Matsusaka (University of Southern California), Kenneth R. Mayer (University of Wisconsin-Madison), Michael P. McDonald (Brookings Institution and George Mason University), Jeffrey Milyo (University of Missouri-Columbia), Richard G. Niemi (University of Rochester), Nathaniel Persily (University of Pennsylvania Law School), Lynda W. Powell (University of Rochester), David Primo (University of Rochester), John Samples (Cato Institute), James M. Snyder Jr. (Massachusetts Institute of Technology), Timothy Werner (University of Wisconsin-Madison), and Amanda Williams (University of Wisconsin-Madison).

ABOUT THE EDITORS

John Samples
John Samples directs the Center for Representative Government at the Cato Institute and teaches political science at Johns Hopkins University.
Michael P. McDonald
Michael P. McDonald is a visiting fellow at the Brookings Institution and is on the faculty of George Mason University.


Ordering Information:
  • ISBN 978-0-8157-5579-1, $24.95
  • ISBN 978-0-8157-5580-7, $54.95





The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics

Event Information

October 27, 2006
10:00 AM - 12:00 PM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


Despite the attention on the mid-term races, few elections are competitive. Electoral competition, already low at the national level, is in decline in state and primary elections as well. Reformers, who point to gerrymandering and a host of other targets for change, argue that improving competition will produce voters who are more interested in elections, better-informed on issues, and more likely to turn out to the polls.

On October 27, the Brookings Institution—in conjunction with the Cato Institute and The Pew Research Center—presented a discussion and a groundbreaking survey exploring the attitudes and opinions of voters in competitive and noncompetitive congressional districts. The survey, part of Pew's regular polling on voter attitudes, was conducted through the weekend of October 21. A series of questions explored the public's perceptions, knowledge, and opinions about electoral competitiveness.

The discussion also explored a publication that addresses the startling lack of competition in our democratic system. The Marketplace of Democracy: Electoral Competition and American Politics (Brookings, 2006), considers the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable, yet for many is becoming stagnant, self-perpetuating, and tone-deaf. Michael McDonald, editor and Brookings visiting fellow, moderated a discussion among co-editor John Samples, director of the Center for Representative Government at the Cato Institute, and Andrew Kohut and Scott Keeter from The Pew Research Center, who also discussed the survey.

Transcript

Event Materials






The Competitive Problem of Voter Turnout

On November 7, millions of Americans will exercise their civic duty to vote. At stake will be control of the House and Senate, not to mention the success of individual candidates running for office. President Bush's "stay the course" agenda will either be enabled over the next two years by a Republican Congress or knocked off kilter by a Democratic one.

With so much at stake, it is not surprising that the Pew Research Center found that 51 percent of registered voters have given a lot of thought to this November's election. That is higher than in any other recent midterm election, including 1994, when the figure was 44 percent and Republicans took control of the House. If that interest translates into votes, turnout should exceed 1994's rate of 41 percent of eligible voters.

There is good reason to suspect that, despite the high interest, turnout will not exceed 1994's. The problem is that a national poll is just that, a national poll, and does not measure the attitudes of voters within particular states and districts.

People vote when there is a reason to do so. Republican and Democratic agendas are in stark contrast on important issues, but voters also need to believe that their vote will matter in deciding who will represent them. It is here that the American electoral system is broken for many voters.

Voters have little choice in most elections. In 1994, Congressional Quarterly rated 98 House elections as competitive. Today, it lists 51. To put it another way, we are already fairly confident of the winner in nearly 90 percent of House races. Although there is no similar tracking for state legislative offices, we know that the number of elections won with less than 60 percent of the vote has fallen since 1994.

The real damage to the national turnout rate is in the large states of California and New York, which together account for 17 percent of the country's eligible voters. Neither state has a competitive Senate or governor's election this year, and few competitive House or state legislative races. Compare that to 1994: when Californians participated in competitive Senate and governor's races, the state's turnout was 5 percentage points above the national rate. The same year, New York's competitive governor's race helped boost turnout a point above the national rate.

Lacking stimulation from two of the largest states, turnout boosts will have to come from elsewhere. Texas has an interesting four-way governor's race that might draw infrequent voters to the polls. Ohio's competitive Senate race and some House races might also draw voters. However, in other large states like Florida, Illinois, Michigan, and Pennsylvania, turnout will suffer from largely uncompetitive statewide races.

The national turnout rate will likely be lower than in 1994, falling shy of 40 percent. This is not to say that turnout will be poor everywhere. Energized voters in Connecticut get to vote in an interesting Senate race, and three of five Connecticut House seats are up for grabs. The problem is that turnout will be localized in these few areas of competition.

The fault does not lie with the voters: people's lives are busy, and a rational person will abstain when their vote does not matter to the election outcome. The political parties are also sensitive to competition and focus their limited resources where elections are competitive. Television advertising and other mobilization efforts by campaigns will only be found in competitive races.

The old adage of "build it and they will come" is relevant. All but hardcore sports fans tune out a blowout. Building competitive elections -- and giving voters real choices -- will do much to increase voter turnout in American politics. There are a number of reforms on the table: redistricting to create competitive districts, campaign financing to give candidates equal resources, and even altering the electoral system to fundamentally change how votes elect representatives. If voters want choice and a government more responsive to their needs, they should consider how these seemingly arcane election procedures have real consequences for the most fundamental democratic act: voting.

Publication: washingtonpost.com





Midterm Elections 2010: Driving Forces, Likely Outcomes, Possible Consequences

Event Information

October 4, 2010
9:30 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

As the recent primary in Delaware attests, this year's midterm elections continue to offer unexpected twists and raise large questions. Will the Republicans take over the House and possibly the Senate? Or has the Republican wave ebbed? What role will President Obama play in rallying seemingly dispirited Democrats -- and what effect will reaction to the sluggish economy have in rallying Republicans? Is the Tea Party more an asset or a liability to the G.O.P.'s hopes? What effect will the inevitably narrowed partisan majorities have in the last two years of Obama's first term? And how will contests for governorships and state legislatures around the nation affect redistricting and the shape of politics to come?

On October 4, a panel of Brookings Governance Studies scholars, moderated by Senior Fellow E.J. Dionne, Jr., attempted to answer these questions. Senior Fellow Thomas Mann provided an overview. Senior Fellow Sarah Binder discussed congressional dynamics under shrunken majorities or divided government. Senior Fellow William Galston offered his views on the administration’s policy prospects during the 112th Congress. Nonresident Senior Fellow Michael McDonald addressed electoral reapportionment and redistricting around the country.

Video

Audio

Transcript

Event Materials






Target Compliance: The Final Frontier of Policy Implementation

Abstract Surprisingly little theoretical attention has been devoted to the final step of the public policy implementation chain: understanding why the targets of public policies do or do not “comply” — that is, behave in ways that are consistent with the objectives of the policy. This paper focuses on why program “targets” frequently fail to…






The Study of the Distributional Outcomes of Innovation: A Book Review


Editor's Note: This post is an extended version of a previous post.

Cozzens, Susan and Dhanaraj Thakur (Eds). 2014. Innovation and Inequality: Emerging technologies in an unequal world. Northampton, Massachusetts: Edward Elgar.

Historically, the debate on innovation has focused on the determinants of the pace of innovation—on the premise that innovation is the driver of long-term economic growth. Analysts and policymakers have taken less interest in how innovation-based growth affects income distribution. Even less attention has been paid to how innovation affects other forms of inequality: economic opportunity, social mobility, access to education, healthcare, and legal representation, and unequal exposure to insalubrious environments, be they physical (polluted air, water, or food, or harmful work conditions) or social (neighborhoods ridden with violence and crime). The relation between innovation, equal political representation, and the right of people to have a say in the collective decisions that affect their lives can also be added to the list of neglected topics.

But neglect has not been universal. A small but growing group of analysts has been working for at least three decades to produce a more careful picture of the relationship between innovation and the economy. A distinguished vanguard of this group has recently published a collection of case studies that illuminates our understanding of innovation and inequality—which is also the title of the book. The book is edited by Susan Cozzens and Dhanaraj Thakur. Cozzens is a professor in the School of Public Policy and Vice Provost of Academic Affairs at Georgia Tech. She studied innovation and inequality long before inequality became a hot topic, and she led the group that collaborated on this book. Thakur is a faculty member of the College of Public Service and Urban Affairs at Tennessee State University (while writing the book he taught at the University of the West Indies in Jamaica). He is an original and sensible voice in the study of the social dimensions of communication technologies.

We’d like to highlight here three aspects of the book: the research design, the empirical focus, and the conceptual framework developed from the case studies in the book.

Edited volumes are all too often collections of disparate papers, but not in this case. This book is patently the product of a research design that probes the evolution of a set of technologies across a wide variety of national settings and, at the same time, examines the different reactions to new technologies within specific countries. The second part of the book devotes five chapters to five emerging technologies—recombinant insulin, genetically modified corn, mobile phones, open-source software, and tissue culture—observing the contrasts and similarities of their evolution in different national environments. In turn, part three considers the experience of eight countries, four of high income—Canada, Germany, Malta, and the U.S.—and four of medium or low income—Argentina, Costa Rica, Jamaica, and Mozambique. The stories in part three tell how these countries assimilated these diverse technologies into their economies and policy environments.

The second aspect to highlight is the deliberate choice of empirical focus. First, the object of inquiry is not technology writ large but a discrete set of emerging technologies—a specificity that would be lost had the authors handled the unwieldy concept of “technology” broadly construed. This choice also reveals the policy orientation of the book: these new entrants have only started to shape the socio-technical spaces they inhabit, while the spaces of older technologies have likely ossified. Second, the study offers ample variance in the jurisdictions under study, i.e. countries of all income levels; a decision that makes theory construction more difficult but the test of general premises more robust.[i] We can add that the book avoids sweeping generalizations. Third, the authors focus on technological projects and their champions, a choice that increases the rigor of the empirical analysis. It naturally narrows the space of generality, but the lessons are more precise and the conjectures are presented with corresponding modesty. The combination of a solid design and a clear empirical focus allows the reader to obtain from the cases taken together a sense of general insight that could not be derived from any individual case standing alone.

Economic and technology historians have tackled the effects of technological advancement, from the steam engine to the Internet, but those lessons are not easily applicable to the present because emerging technologies intimate a different kind of reconfiguration of economic and social structures. It is still too early to know the long-term effects of new technologies like genetically modified crops or mobile-phone cash transfers, but this book does a good job of providing useful concepts that begin to form an analytical framework. In addition, the mix of country case studies subverts the disciplinary separation between the economics of innovation (devoted mostly to high-income countries) and development studies (interested in middle- and low-income economies). As a consequence of these choices, the reader can draw lessons likely to apply to technologies and countries other than the ones discussed in this book.

The third aspect we would like to underscore in this review is the conceptual framework. Cozzens, Thakur and their colleagues have done a service to anyone interested in pursuing the empirical and theoretical analysis of innovation and inequality.

For these authors, income distribution is only one part of the puzzle. They observe that inequalities also run along social, ethnic, and gender cleavages. Frances Stewart, of Oxford University, introduced the notion of horizontal inequalities, or inequalities at the level of social groups (for instance, across ethnic groups or genders), to contrast with vertical inequalities, those operating at the individual level (such as household income or wealth). The authors of this book borrow Stewart’s concept, attend to horizontal inequalities in the technologies they examine, and observe that new technologies enter marketplaces already configured by historical forms of exclusion. A dramatic example is the lack of access to recombinant insulin in the U.S., because it is expensive and minorities are less likely to have health insurance (see Table 3.1 on p. 80).[ii] Another example is how innovation opens opportunities for entrepreneurs but closes them for women in cultures that systematically exclude women from entrepreneurial activities.

Another key concept is that of complementary assets. A poignant example is the failure of recombinant insulin to reach poor patients in Mozambique, who are sent home with the old medicine even though insulin is subsidized by the government. Doctors deny the poor the new treatment because those patients lack the literacy and household resources (e.g., a refrigerator, a clock) necessary to preserve the shots, inject themselves periodically, and read blood sugar levels. Technologies aimed at fighting poverty require complementary assets to be already in place; in their absence, they fail to mitigate suffering and, ultimately, to ameliorate inequality. Another illustration of the importance of complementary assets is the case of open-source software. This technology has a nominal price of zero; however, only individuals who have computers and the time, disposition, and resources to learn open-source operating systems benefit. Likewise, companies without the internal resources to adapt open-source software will not adopt it and will remain economically tied to proprietary software.

These observations lead to two critical concepts elaborated in the book: distributional boundaries and inequalities across technological transitions. Distributional boundaries refer to the reach of the benefits of a new technology; such boundaries can be geographic (urban/suburban, center/periphery) or run along social cleavages or income levels. Standard models of technological diffusion assume the entire population will gradually adopt a new technology, but the authors observe several factors that in reality limit diffusion to certain groups. The most insidious are monopolies with sufficient control over markets to sustain high prices, and in these markets price becomes an exclusionary barrier to diffusion. This is quite evident in the case of mobile phones (see Table 5.1, p. 128), where monopolies (or oligopolies) have the market power to create and maintain a distributional boundary between post-pay, high-quality service for middle- and high-income clients and pre-pay, low-quality service for poor customers. This boundary renders pre-pay plans doubly regressive: per-minute rates are higher than post-pay rates, and phone expenses represent a far larger percentage of poor people’s income. Another example of exclusion occurs with GMOs: in some countries subsistence farmers cannot afford the prices of engineered seeds, a disadvantage that compounds their cost and health problems as they must use more, and stronger, pesticides.
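The double regressivity of pre-pay plans is simple arithmetic, and a tiny sketch makes it concrete. The per-minute rates, usage, and incomes below are hypothetical, chosen only to illustrate how a worse price compounds with a smaller income base.

```python
# Hypothetical figures: a pre-pay customer pays a higher per-minute rate
# out of a smaller income than a post-pay customer does.
minutes = 200                             # monthly minutes of use (assumed)
prepay_rate, postpay_rate = 0.25, 0.10    # price per minute (assumed)
low_income, high_income = 300.0, 3000.0   # monthly incomes (assumed)

prepay_bill = prepay_rate * minutes       # 50.0
postpay_bill = postpay_rate * minutes     # 20.0

# Phone spending as a share of each customer's income:
prepay_share = prepay_bill / low_income       # ~17% for the poor customer
postpay_share = postpay_bill / high_income    # under 1% for the better-off customer
```

Both effects push in the same direction: although the pre-pay bill here is only 2.5 times larger, it consumes roughly 25 times the share of income.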

A technological transition, as used here, is an inflection point in the adoption of a technology that reshapes its distributional boundaries. When smartphones were introduced, a new market for second-hand or hand-me-down phones was created in Maputo; people who could not access the top technology got stuck with a sub-par system. Looking at tissue culture, the authors find that “whether it provides benefits to small farmers as well as large ones depends crucially on public interventions in the lower-income countries in our study” (p. 190). In fact, farmers in Costa Rica enjoy much better protections compared to those in Jamaica and Mozambique, because the governmental program created to support banana tissue culture was designed and implemented as an extension program aimed at disseminating know-how among small farmers, not exclusively at large multinational-owned farms. Because of this different policy environment, the distributional boundaries of the same technology were drawn far more expansively in Costa Rica.

This is a book devoted to presenting the complexity of the innovation-inequality link. The authors are generous in their descriptions, punctilious in the analysis of their case studies, and cautious and measured in their conclusions. Readers who seek an overarching theory of inequality, a simple story, or a test of causality are bound to be disappointed. But those same readers may find the highest reward in carefully reading all the case studies presented here, not only for the edifying richness of their detail but also because they will be invited to rethink the proper way to understand and address the problem of inequality.[iii]
 


[i] These are clearly spelled out: “we assumed that technologies, societies, and inequalities co-evolved; that technological projects are always inherently distributional; and that the distributional aspects of individual projects and portfolios of projects are open to choice.” (p. 6)

[ii] This problem has been somewhat mitigated since the Affordable Care Act entered into effect.

[iii] Kevin Risser contributed to this posting.

 

Image Source: © Akhtar Soomro / Reuters





The fair compensation problem of geoengineering


Geoengineering promises to place average global temperature under human control, and it is thus considered a powerful instrument for the international community to deal with global warming. While great energy has been devoted to learning more about the natural systems it would affect, questions of a political nature have received far less consideration. Taking as given that regional effects will be asymmetric, the nations of the world will consent to deploying this technology only if they can be assured of a fair compensation mechanism, something like an insurance policy. The question of compensation reveals that the politics of geoengineering are far more difficult than the technical aspects.

What is Geoengineering?

In June 1991, Mount Pinatubo erupted, throwing a massive amount of volcanic sulfate aerosols into the high atmosphere. The resulting cloud dispersed over weeks throughout the planet and cooled its average temperature by about 0.5° Celsius over the next two years. If this kind of natural phenomenon could be replicated and controlled, engineering the Earth’s climate would be within reach.

Spraying aerosols in the stratosphere is one method of solar radiation management (SRM), a class of climate engineering that focuses on increasing the albedo, i.e. reflectivity, of the planet’s atmosphere. Other SRM methods include brightening clouds by increasing their content of sea salt. A second class of geoengineering efforts focuses on removing carbon from the atmosphere, whether by sequestration (burying it deep underground) or by increasing land or marine vegetation. Of all these methods, SRM is appealing for its effectiveness and low cost; a recent study put the cost at about $5 to $8 billion per year.1

Not only is SRM relatively inexpensive, but we already have the technological pieces that, assembled properly, would inject the skies with particles that reflect sunlight back into space. For instance, a fleet of modified Boeing 747s could deliver the necessary payload. Advocates of geoengineering are not so much concerned about developing the technology to effect SRM as about its likely consequences, not only for slowing global warming but also for regional weather. And there lies the difficult question for geoengineering: the effects of SRM are likely to be unequally distributed across nations.

Here is one example of these asymmetries: Julia Pongratz and colleagues at the Department of Global Ecology of the Carnegie Institution for Science estimated a net increase in yields of wheat, corn, and rice under SRM-modified weather. However, the study also found a redistributive effect, with equatorial countries experiencing lower yields.2 We can then expect that equatorial countries will demand fair compensation before signing on to the deployment of SRM, which leads to two problems: how to calculate compensation, and how to agree on a compensation mechanism.

The calculus of compensation

What should be the basis for fair compensation? One view of fairness could be that, every year, all economic gains derived from SRM are pooled together and distributed evenly among the regions or countries that experience economic losses.

If the system pools gains from SRM and distributes them in proportion to losses, questions about the balance will arise only in years when gains and losses are roughly equal. If losses are far greater than gains, the scheme becomes a form of insurance that cannot underwrite some of the incidents it intends to cover. People will not buy such an insurance policy; which is to say, some countries will not authorize SRM deployment. Conversely, if the pool has a large balance left after paying out compensation, the winners from SRM will demand lower compensation taxes.
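A minimal sketch of this pooling scheme, with hypothetical countries and figures, shows how the coverage ratio behaves when losses exceed pooled gains:

```python
# A proportional compensation pool: all SRM gains are pooled and paid out
# in proportion to losses. Country names and figures are hypothetical.

def settle_pool(gains, losses):
    """Return per-country payouts and the fraction of each loss covered."""
    pool = sum(gains.values())
    total_loss = sum(losses.values())
    if total_loss == 0:
        return {}, 1.0  # nothing to compensate this year
    coverage = min(1.0, pool / total_loss)  # < 1.0 means losses are not fully covered
    payouts = {country: loss * coverage for country, loss in losses.items()}
    return payouts, coverage

# A bad year: pooled gains of 60 against total losses of 100.
payouts, coverage = settle_pool({"A": 40, "B": 20}, {"C": 70, "D": 30})
# coverage is 0.6 -- each losing country recovers 60 cents on the dollar,
# the "insurance that cannot underwrite its incidents" case.
```

In the reverse case, where pooled gains exceed total losses, `coverage` caps at 1.0 and a surplus remains in the pool, which is exactly the situation in which the winners would lobby for lower compensation taxes.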

Further complicating the problem is the question of how to separate gains or losses attributable to SRM from ordinary regional weather fluctuations. Isolating the SRM effect could easily become intractable because regional weather patterns are themselves affected by SRM. For instance, in any year in which El Niño is particularly strong, uncertainty about the net effect of SRM will balloon, because SRM could affect the severity of the oceanic oscillation itself. Science can reduce uncertainty only to a degree, because the better we understand nature, the more we understand the contingency of natural systems. We can expect ever better explanations of natural phenomena from science, but it would be unfair to ask science to reduce that understanding to a hard figure we can plug into a compensation equation.

Still, greater complexity arises when separating SRM effects from policy effects at the local and regional level. Some countries will surely organize better than others to manage this change, and preparation will be a factor in determining the magnitude of gains or losses. Inherent to the problem of estimating gains and losses from SRM is the inescapable subjective element of assessing preparation. 

The politics of compensation

Advocates of geoengineering tell us that their advocacy is not about deploying SRM; rather, it is about better understanding the scientific facts before we even consider deployment. It is tempting to believe that accumulating science on SRM effects would settle matters. But given the factors described above, it is quite possible that more science will instead crystallize the uncertainty about exact amounts of compensation. The calculus of gain or loss, the difference between reality and a counterfactual of what regions and countries would otherwise have experienced, requires certainty, but science yields only irreducible uncertainty about nature.

The epistemic problems with estimating compensation are only to be compounded by the political contestation of those numbers. Even within the scientific community, different climate models will yield different results, and since economic compensation is derived from those models’ output, we can expect a serious contestation of the objectivity of the science of SRM impact estimation. Who should formulate the equation? Who should feed the numbers into it? A sure way to alienate scientists from the peoples of the world is to ask them to assert their cognitive authority over this calculus. 

What’s more, other parts of the compensation equation related to regional efforts to deal with SRM effect are inherently subjective. We should not forget the politics of asserting compensation commensurate to preparation effort; countries that experience low losses may also want compensation for their efforts preparing and coping with natural disasters.

Not only would a compensation equation be a sham, it would be unmanageable; its legitimacy would always be in question. The calculus of compensation may seem a way to circumvent the impasses of politics and define fairness mathematically. Ironically, it is shot through with subjectivity; it is truly a political exercise.

Can we do without compensation?

Technological innovations are similar to legislative acts, observed Langdon Winner.3 Choices made at the earliest stage of technical design quickly “become strongly fixed in material equipment, economic investment, and social habit, [and] the original flexibility vanishes for all practical purposes once the initial commitments are made.” For that reason, he insisted, “the same careful attention one would give to the rules, roles, and relationships of politics must also be given to such things as the building of highways, the creation of television networks, and the tailoring of seemingly insignificant features on new machines.”

If technological change can be thought of as legislative change, we must consider how so momentous a technology as SRM can be deployed in a manner consonant with our democratic values. Engineering the planet’s weather is nothing short of passing an amendment to Planet Earth’s Constitution. A fair compensation scheme is one pesky clause in that constitutional amendment. It seems a small clause compared with the extent of the intervention, the governance of deployment and its consequences, and the international commitments to be made as a condition for deployment (such as emissions mitigation and adaptation to climate change). But in the short consideration afforded here, we get a glimpse of the intractable political problem of setting up a compensation scheme. And yet, unless such a clause were approved by a majority of nations, SRM deployment would have little hope of being consonant with democratic aspirations.


1. McClellan, Justin, David W. Keith, and Jay Apt. 2012. Cost analysis of stratospheric albedo modification delivery systems. Environmental Research Letters 7(3): 1-8.

2. Pongratz, Julia, D. B. Lobell, L. Cao, and K. Caldeira. 2012. Crop yields in a geoengineered climate. Nature Climate Change 2: 101-105.

3. Winner, Langdon. 1980. Do artifacts have politics? Daedalus 109(1): 121-136.

Image Source: © Antara Photo Agency / Reuters
      
 
 





Global economic and environmental outcomes of the Paris Agreement

The Paris Agreement, adopted by the Parties to the United Nations Framework Convention on Climate Change (UNFCCC) in 2015, has now been signed by 197 countries. It entered into force in 2016. The agreement established a process for moving the world toward stabilizing greenhouse gas (GHG) concentrations at a level that would avoid dangerous climate…

       





Policy insights from comparing carbon pricing modeling scenarios

Carbon pricing is an important policy tool for reducing greenhouse gas pollution. The Stanford Energy Modeling Forum exercise 32 convened eleven modeling teams to project emissions, energy, and economic outcomes of an illustrative range of economy-wide carbon price policies. The study compared a coordinated reference scenario involving no new policies with policy scenarios that impose…

       





The risk of fiscal collapse in coal-reliant communities

EXECUTIVE SUMMARY If the United States undertakes actions to address the risks of climate change, the use of coal in the power sector will decline rapidly. This presents major risks to the 53,000 US workers employed by the industry and their communities. 26 US counties are classified as “coal-mining dependent,” meaning the coal industry is…

       





Columbia Energy Exchange: Coal communities face risk of fiscal collapse

       









Modeling community efforts to reduce childhood obesity

Why childhood obesity matters According to the latest data, childhood obesity affects nearly 1 in 5 children in the United States, a number which has more than tripled since the early 1970s. Children who have obesity are at a higher risk of many immediate health risks such as high blood pressure and high cholesterol, type…

       





Development of a computational modeling laboratory for examining tobacco control policies: Tobacco Town

       





Holding our own: Is the future of Islam in the West communal?

       





Why France? Understanding terrorism’s many (and complicated) causes


The terrible attack in Nice on July 14—Bastille Day—saddened us all. For a country that has done so much historically to promote democracy and human rights at home and abroad, France is paying a terrible and unfair price, even more than most countries. My colleagues Will McCants and Chris Meserole have carefully documented the toll that France, and certain other Francophone countries like Belgium, have suffered in recent years from global terrorism. It is heart-wrenching.

From what we know so far, the attack was carried out by a deeply distraught, potentially deranged, and in any case extremely brutal local man from Nice of Tunisian descent and French nationality. Marital problems, the recent loss of his job, and a general sense of personal unhappiness seem to have contributed to the state of mind that led him to commit this heinous atrocity. Perhaps we will soon learn that ISIS inspired the attack, directly or indirectly, as well. My colleague Dan Byman has already tapped into his deep expertise about terrorism to remind us that ISIS had in fact encouraged ramming attacks with vehicles before, even if the actual use of such tactics in this case was mostly new.

This attack will again raise the question: Why France? On this point, I do have a somewhat different take than some of my colleagues. The argument that France has partly brought these tragedies upon itself—perhaps because of its policies of secularism and in particular its limitations on when and where women can wear the veil in France—strikes me as unpersuasive. Its logical policy implications are also potentially disturbing, because if interpreted wrongly, it could lead to a debate on whether France should modify such policies so as to make itself less vulnerable to terrorism. That outcome, even if unintended, could dance very close to the line of encouraging appeasement of heinous acts of violence with policy changes that run counter to much of what French culture and society would otherwise favor. So I feel the need to push back.

Here are some of the arguments, as I see them, against blaming French culture or policy for this recent string of horrible attacks including the Charlie Hebdo massacre, the November 2015 mass shootings in Paris, and the Nice tragedy (as well as recent attacks in Belgium):

  • Starting with the simplest point, we still do not know much about the perpetrator of the Nice killings. From what we do surmise so far, personal problems appear to be largely at the root of the violence—different from, but not entirely unlike, the case with the Orlando shooter, Omar Mateen.
  • We need to be careful about drawing implications from a small number of major attacks. Since 2000, there have also been major attacks in the Western world by extremist jihadis or takfiris in New York, Washington, Spain, London, San Bernardino, Orlando, and Russia. None of these are Francophone. Even Belgium is itself a mixed country, linguistically and culturally.
  • Partly for reasons of geography, as well as history, France does face a larger problem than some other European countries of individuals leaving the country to fight for ISIS in Syria or Iraq and then returning. But it is hardly unique in the scale of this problem.
  • Continental Europe has a specific additional problem that is not as widely shared in the United Kingdom or the United States: Its criminal networks largely overlap with its extremist and/or terrorist networks. This point may be irrelevant to the Nice attack, but more widely, extremists in France or Belgium can make use of illicit channels for moving people, money, and weapons that are less available to would-be jihadis in places like the U.K. (where the criminal networks have more of a Caribbean and sub-Saharan African character, meaning they overlap less with extremist networks).
  • Of course, the greatest numbers of terrorist attacks by Muslim extremists occur in the broader Muslim world, with Muslims as the primary victims—from Iraq and Syria to Libya and Yemen and Somalia to South Asia. French domestic policies have no bearing on these, of course.

There is no doubt that good work by counterterrorism and intelligence forces is crucial to preventing future attacks. France has done well in this regard—though it surely can do better, and it is surely trying to get better. There is also no doubt that promoting social cohesion in a broad sense is a worthy goal. But I would hesitate, personally, to attribute any apparent trend line in major attacks in the West to a particular policy of a country like France—especially when the latter is in fact doing much to seek to build bridges, as a matter of national policy, with Muslims at home and abroad. 

There is much more to do in promoting social cohesion, to be sure, even here in America (though our own problems probably center more on race than on religion at the moment). But the Nice attacker almost assuredly didn’t attack because his estranged wife couldn’t wear a veil in the manner and/or places she wanted. At a moment like this in particular, I disagree with insinuations to the contrary.

      
 
 





Webinar: Electricity Discoms in India post-COVID-19: Untangling the short-run from the “new normal”

https://www.youtube.com/watch?v=u6-PSpx4dqU India’s electricity grid’s most complex and perhaps most critical layer is the distribution companies (Discoms) that retail electricity to consumers. They have historically faced numerous challenges of high losses, both financial and operational. COVID-19 has imposed new challenges on the entire sector, but Discoms are the lynchpin of the system.  In a panel discussion…

       





District Mineral Foundation funds crucial resource for ensuring income security in mining areas post COVID-19

The Prime Minister of India held a meeting on April 30, 2020 to consider reforms in the mines and coal sector to jump-start the Indian economy in the backdrop of COVID-19. The mining sector, which is a primary supplier of raw materials to the manufacturing and infrastructure sectors, is being considered to play a crucial…

       





Podcast | Comparative politics & international relations: Lessons for Indian foreign policy

       





An accident of geography: Compassion, innovation, and the fight against poverty—A conversation with Richard C. Blum

Over the past 20 years, the proportion of the world population living in extreme poverty has decreased by over 60 percent, a remarkable achievement. Yet further progress requires expanded development finance and more innovative solutions for raising shared prosperity and ending extreme poverty. In his new book, “An Accident of Geography: Compassion, Innovation and the […]

      
 
 









To lead in a complex world, cities need to get back to basics

To adapt to the growing leadership demands of a world in flux, cities need a strong grasp of the fundamentals of urban governance and finance—and an understanding of how to improve them. Since launching The Project a little more than a year ago, the world has changed in dramatic ways. Yet with power balances in…

       





First Steps Toward a Quality of Climate Finance Scorecard (QUODA-CF): Creating a Comparative Index to Assess International Climate Finance Contributions

Executive Summary Are climate finance contributor countries, multilateral aid agencies and specialized funds using widely accepted best practices in foreign assistance? How is it possible to measure and compare international climate finance contributions when there are as yet no established metrics or agreed definitions of the quality of climate finance? As a subjective metric, quality…

       





Welcoming Czech Finance Minister Andrej Babis


Last Thursday was finance minister day at Brookings, with three separate visits from European finance ministers who were in town for the IMF meetings. Here in Governance Studies, we were delighted to have the opportunity to host Czech Finance Minister and Deputy Prime Minister Andrej Babis for a wide-ranging conversation with our scholars, including Darrell West, Bill Galston, John Hudak, and myself, as well as Bill Drozdiak of Brookings' Center on the United States and Europe and Jeff Gedmin of Georgetown University. Brookings has a long tradition of welcoming distinguished European visitors, thereby contributing to the strengthening of transatlantic ties. That is particularly important now, as Europe confronts the destabilizing effects of Russia's aggression in Ukraine, the Greek debt crisis, the continuing aftereffects of the Great Recession, and multiple other challenges. We were honored to host Minister Babis and we look forward to many more visits here from leaders of our close U.S. ally, the Czech Republic.

(Photo credit: Embassy of the Czech Republic)


Image Source: © Mike Theiler / Reuters
      





Welcoming member of Knesset Erel Margalit to Brookings


One of the great parts of being at Brookings has been the many champions of government reform in the US and around the world who have reached out to visit us here, meet me and my colleagues, and talk about how best to transform government and make it work better for people. The latest was MK Erel Margalit, who before joining the Israeli Knesset started a leading venture capital firm in Israel (and was the first Israeli to make the Forbes Midas list of top tech investors globally). My Brookings colleagues, including Elaine Kamarck, Bill Galston, Natan Sachs and John Hudak talked with MK Margalit about the lessons he learned in the private sector, and about his efforts to bring those lessons to his work in government. 

Coming not long after our meeting with Czech Deputy Prime Minister and Finance Minister Andrej Babis, who enjoyed similar success in business and has ambitious reform goals of his own informed by his business career, it was fascinating to talk about what does and does not translate to the government sector. MK Margalit’s focus includes supporting peace and economic development by developing enterprise zones in and around Israel that encourage economic partnerships between Jewish and Arab Israelis and their businesses, and that include Palestinians as well. It was an impressive melding of business and government methodologies. The meeting built on similar ones we have had with other innovators including CFPB Director Rich Cordray, former Mayor and Governor Martin O’Malley, and of course DPM Babis, all of whom have innovated to make government function more effectively.


Image Source: © Ronen Zvulun / Reuters
      





How Promise programs can help former industrial communities

The nation is seeing accelerating gaps in economic opportunity and prosperity between more educated, tech-savvy, knowledge workers congregating in the nation’s “superstar” cities (and a few university-town hothouses) and residents of older industrial cities and the small towns of “flyover country.” These growing divides are shaping public discourse, as policymakers and thought leaders advance recipes…

       





High Achievers, Tracking, and the Common Core


A curriculum controversy is roiling schools in the San Francisco Bay Area.  In the past few months, parents in the San Mateo-Foster City School District, located just south of San Francisco International Airport, voiced concerns over changes to the middle school math program. The changes were brought about by the Common Core State Standards (CCSS).  Under previous policies, most eighth graders in the district took algebra I.  Some very sharp math students, who had already completed algebra I in seventh grade, took geometry in eighth grade. The new CCSS-aligned math program will reduce eighth grade enrollments in algebra I and eliminate geometry altogether as a middle school course. 

A little background information will clarify the controversy.  Eighth grade mathematics may be the single grade-subject combination most profoundly affected by the CCSS.  In California, the push for most students to complete algebra I by the end of eighth grade has been a centerpiece of state policy, as it has been in several states influenced by the “Algebra for All” movement that began in the 1990s.  Nationwide, in 1990, about 16 percent of all eighth graders reported that they were taking an algebra or geometry course.  In 2013, the number was three times larger, and nearly half of all eighth graders (48 percent) were taking algebra or geometry.[i]  When that percentage goes down, as it is sure to under the CCSS, what happens to high achieving math students?

The parents who are expressing the most concern have kids who excel at math.  One parent in San Mateo-Foster City told The San Mateo Daily Journal, “This is really holding the advanced kids back.”[ii] The CCSS math standards recommend a single math course for seventh grade, integrating several math topics, followed by a similarly integrated math course in eighth grade.  Algebra I won’t be offered until ninth grade.  The San Mateo-Foster City School District decided to adopt a “three years into two” accelerated option.  This strategy is suggested on the Common Core website as an option that districts may consider for advanced students.  It combines the curriculum from grades seven through nine (including algebra I) into a two-year offering that students can take in seventh and eighth grades.[iii]  The district will also provide—at one school site—a sequence beginning in sixth grade that compacts four years of math into three.  Both accelerated options culminate in the completion of algebra I in eighth grade.

The San Mateo-Foster City School District is home to many well-educated, high-powered professionals who work in Silicon Valley.  They are unrelentingly liberal in their politics.  Equity is a value they hold dear.[iv]  They also know that completing at least one high school math course in middle school is essential for students who wish to take AP Calculus in their senior year of high school.  As CCSS is implemented across the nation, administrators in districts with demographic profiles similar to San Mateo-Foster City will face parents of mathematically precocious kids asking whether the “common” in Common Core mandates that all students take the same math course.  Many of those districts will respond to their constituents and provide accelerated pathways (“pathway” is CCSS jargon for course sequence). 

But other districts will not.  Data show that urban schools, schools with large numbers of black and Hispanic students, and schools located in impoverished neighborhoods are reluctant to differentiate curriculum.  It is unlikely that gifted math students in those districts will be offered an accelerated option under CCSS.  The reason why can be summed up in one word: tracking.

Tracking in eighth grade math means providing different courses to students based on their prior math achievement.  The term “tracking” has been stigmatized, coming under fire for being inequitable.  Historically, where tracking existed, black, Hispanic, and disadvantaged students were often underrepresented in high-level math classes; white, Asian, and middle-class students were often over-represented.  An anti-tracking movement gained a full head of steam in the 1980s.  Tracking reformers knew that persuading high schools to de-track was hopeless.  Consequently, tracking’s critics focused reform efforts on middle schools, urging that they group students heterogeneously with all students studying a common curriculum.  That approach took hold in urban districts, but not in the suburbs.

Now the Common Core and de-tracking are linked.  Providing an accelerated math track for high achievers has become a flashpoint throughout the San Francisco Bay Area.  An October 2014 article in The San Jose Mercury News named Palo Alto, Saratoga, Cupertino, Pleasanton, and Los Gatos as districts that have announced, in response to parent pressure, that they are maintaining an accelerated math track in middle schools.  These are high-achieving, suburban districts.  Los Gatos parents took to the internet with a petition drive when a rumor spread that advanced courses would end.  EdSource reports that 900 parents signed a petition opposing the move and board meetings on the issue were packed with opponents. The accelerated track was kept.  Piedmont established a single track for everyone, but allowed parents to apply for an accelerated option.  About twenty-five percent did so.  The Mercury News story underscores the demographic pattern that is unfolding and asks whether CCSS “could cement a two-tier system, with accelerated math being the norm in wealthy areas and the exception elsewhere.”

What is CCSS’s real role here?  Does the Common Core take an explicit stand on tracking?  Not really.  But de-tracking advocates can interpret the “common” in Common Core as license to eliminate accelerated tracks for high achievers.  As a noted CCSS supporter (and tracking critic), William H. Schmidt, has stated, “By insisting on common content for all students at each grade level and in every community, the Common Core mathematics standards are in direct conflict with the concept of tracking.”[v]  Thus, tracking joins other controversial curricular ideas—e.g., integrated math courses instead of courses organized by content domains such as algebra and geometry; an emphasis on “deep,” conceptual mathematics over learning procedures and basic skills—as “dog whistles” embedded in the Common Core.  Controversial positions aren’t explicitly stated, but they can be heard by those who want to hear them.    

CCSS doesn’t have to take an outright stand on these debates in order to have an effect on policy.  For the practical questions that local grouping policies resolve—who takes what courses and when do they take them—CCSS wipes the slate clean.  There are plenty of people ready to write on that blank slate, particularly administrators frustrated by unsuccessful efforts to de-track in the past.

Suburban parents are mobilized in defense of accelerated options for advantaged students.  What about kids who are outstanding math students but also happen to be poor, black, or Hispanic?  What happens to them, especially if they attend schools in which the top institutional concern is meeting the needs of kids functioning several years below grade level?  I presented a paper on this question at a December 2014 conference held by the Fordham Institute in Washington, DC.  I proposed a pilot program of “tracking for equity.”  By that term, I mean offering black, Hispanic, and poor high achievers the same opportunity that the suburban districts in the Bay Area are offering.  High achieving middle school students in poor neighborhoods would be able to take three years of math in two years and proceed on a path toward AP Calculus as high school seniors.

It is true that tracking must be done carefully.  Tracking can be conducted unfairly and has been used unjustly in the past.  One of the worst consequences of earlier forms of tracking was that low-skilled students were tracked into dead end courses that did nothing to help them academically.  These low-skilled students were disproportionately from disadvantaged communities or communities of color.  That’s not a danger in the proposal I am making.  The default curriculum, the one every student would take if not taking the advanced track, would be the Common Core.  If that’s a dead end for low achievers, Common Core supporters need to start being more honest in how they are selling the CCSS.  Moreover, to ensure that the policy gets to the students for whom it is intended, I have proposed running the pilot program in schools predominantly populated by poor, black, or Hispanic students.  The pilot won’t promote segregation within schools because the sad reality is that participating schools are already segregated.

Since I presented the paper, I have privately received negative feedback from both Algebra for All advocates and Common Core supporters.  That’s disappointing.  Because of their animus toward tracking, some critics seem to support a severe policy swing from Algebra for All, which was pursued for equity, to Algebra for None, which will be pursued for equity.  It’s as if either everyone or no one should be allowed to take algebra in eighth grade.  The argument is that allowing only some eighth graders to enroll in algebra is elitist, even if the students in question are poor students of color who are prepared for the course and likely to benefit from taking it.

The controversy raises crucial questions about the Common Core.  What’s common in the common core?  Is it the curriculum?  And does that mean the same curriculum for all?  Will CCSS serve as a curricular floor, ensuring all students are exposed to a common body of knowledge and skills?  Or will it serve as a ceiling, limiting the progress of bright students so that their achievement looks more like that of their peers?  These questions will be answered differently in different communities, and as they are, the inequities that Common Core supporters think they’re addressing may surface again in a profound form.   



[i] Loveless, T. (2008). The 2008 Brown Center Report on American Education. Retrieved from http://www.brookings.edu/research/reports/2009/02/25-education-loveless. For San Mateo-Foster City’s sequence of math courses, see: page 10 of http://smfc-ca.schoolloop.com/file/1383373423032/1229222942231/1242346905166154769.pdf 

[ii] Swartz, A. (2014, November 22). “Parents worry over losing advanced math classes: San Mateo-Foster City Elementary School District revamps offerings because of Common Core.” San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-11-22/parents-worry-over-losing-advanced-math-classes-san-mateo-foster-city-elementary-school-district-revamps-offerings-because-of-common-core/1776425133822.html

[iii] Swartz, A. (2014, December 26). “Changing Classes Concern for parents, teachers: Administrators say Common Core Standards Reason for Modifications.” San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-12-26/changing-classes-concern-for-parents-teachers-administrators-say-common-core-standards-reason-for-modifications/1776425135624.html

[iv] In the 2014 election, Jerry Brown (D) took 75% of Foster City’s votes for governor.  In the 2012 presidential election, Barack Obama received 71% of the vote. http://www.city-data.com/city/Foster-City-California.html

[v] Schmidt, W.H. and Burroughs, N.A. (2012) “How the Common Core Boosts Quality and Equality.” Educational Leadership, December 2012/January 2013. Vol. 70, No. 4, pp. 54-58.


     
 
 





Measuring effects of the Common Core


Part II of the 2015 Brown Center Report on American Education

Over the next several years, policy analysts will evaluate the impact of the Common Core State Standards (CCSS) on U.S. education.  The task promises to be challenging.  The question most analysts will focus on is whether the CCSS is good or bad policy.  This section of the Brown Center Report (BCR) tackles a set of seemingly innocuous questions compared to the hot-button question of whether Common Core is wise or foolish.  The questions all have to do with when Common Core actually started, or more precisely, when the Common Core started having an effect on student learning.  And if it hasn’t yet had an effect, how will we know that CCSS has started to influence student achievement? 

The analysis below probes this issue empirically, hopefully persuading readers that deciding when a policy begins is elemental to evaluating its effects.  The question of a policy’s starting point is not always easy to answer.  Yet the answer has consequences.  You can’t figure out whether a policy worked or not unless you know when it began.[i] 

The analysis uses surveys of state implementation to model different CCSS starting points for states and produces a second early report card on how CCSS is doing.  The first report card, focusing on math, was presented in last year’s BCR.  The current study updates state implementation ratings that were presented in that report and extends the analysis to achievement in reading.  The goal is not only to estimate CCSS’s early impact, but also to lay out a fair approach for establishing when the Common Core’s impact began—and to do it now before data are generated that either critics or supporters can use to bolster their arguments.  The experience of No Child Left Behind (NCLB) illustrates this necessity.

Background

After the 2008 National Assessment of Educational Progress (NAEP) scores were released, former Secretary of Education Margaret Spellings claimed that the new scores showed “we are on the right track.”[ii] She pointed out that NAEP gains in the previous decade, 1999-2009, were much larger than in prior decades.  Mark Schneider of the American Institutes for Research (and a former Commissioner of the National Center for Education Statistics [NCES]) reached a different conclusion. He compared NAEP gains from 1996-2003 to 2003-2009 and declared NCLB’s impact disappointing.  “The pre-NCLB gains were greater than the post-NCLB gains.”[iii]  It is important to highlight that Schneider used the 2003 NAEP scores as the starting point for assessing NCLB.  A report from FairTest on the tenth anniversary of NCLB used the same demarcation for pre- and post-NCLB time frames.[iv]  FairTest is an advocacy group critical of high stakes testing—and harshly critical of NCLB—but if the 2003 starting point for NAEP is accepted, its conclusion is indisputable, “NAEP score improvement slowed or stopped in both reading and math after NCLB was implemented.”

Choosing 2003 as NCLB’s starting date is intuitively appealing.  The law was introduced, debated, and passed by Congress in 2001.  President Bush signed NCLB into law on January 8, 2002.  It takes time to implement any law.  The 2003 NAEP is arguably the first chance that the assessment had to register NCLB’s effects. 

Selecting 2003 is consequential, however.  Some of the largest gains in NAEP’s history were registered between 2000 and 2003.  Once 2003 is established as a starting point (or baseline), pre-2003 gains become “pre-NCLB.”  But what if the 2003 NAEP scores were influenced by NCLB? Experiments evaluating the effects of new drugs collect baseline data from subjects before treatment, not after the treatment has begun.  Similarly, evaluating the effects of public policies requires that baseline data not be influenced by the policies under evaluation.

Avoiding such problems is particularly difficult when state or local policies are adopted nationally.  The federal effort to establish a speed limit of 55 miles per hour in the 1970s is a good example.  Several states already had speed limits of 55 mph or lower prior to the federal law’s enactment.  Moreover, a few states lowered speed limits in anticipation of the federal limit while the bill was debated in Congress.  On the day President Nixon signed the bill into law—January 2, 1974—the Associated Press reported that only 29 states would be required to lower speed limits.  Evaluating the effects of the 1974 law with national data but neglecting to adjust for what states were already doing would obviously yield tainted baseline data.

There are comparable reasons for questioning 2003 as a good baseline for evaluating NCLB’s effects.  The key components of NCLB’s accountability provisions—testing students, publicizing the results, and holding schools accountable for results—were already in place in nearly half the states.  In some states they had been in place for several years.  The 1999 iteration of Quality Counts, Education Week’s annual report on state-level efforts to improve public education, entitled Rewarding Results, Punishing Failure, was devoted to state accountability systems and the assessments underpinning them. Testing and accountability are especially important because they have drawn fire from critics of NCLB, a law that wasn’t passed until years later.

The Congressional debate of NCLB legislation took all of 2001, allowing states to pass anticipatory policies.  Derek Neal and Diane Whitmore Schanzenbach reported that “with the passage of NCLB lurking on the horizon,” Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes.[v] In the summer and fall of 2002, with NCLB now the law of the land, state after state released lists of schools falling short of NCLB’s requirements.  Then the 2002-2003 school year began, during which the 2003 NAEP was administered.  Using 2003 as a NAEP baseline assumes that none of these activities—previous accountability systems, public lists of schools in need of improvement, anticipatory policy shifts—influenced achievement.  That is unlikely.[vi]

The Analysis

Unlike NCLB, there was no “pre-CCSS” state version of Common Core.  States vary in how quickly and aggressively they have implemented CCSS.  For the BCR analyses, two indexes were constructed to model CCSS implementation.  They are based on surveys of state education agencies and named for the two years that the surveys were conducted.  The 2011 survey reported the number of programs (e.g., professional development, new materials) on which states reported spending federal funds to implement CCSS.  Strong implementers spent money on more activities.  The 2011 index was used to investigate eighth grade math achievement in the 2014 BCR.  A new implementation index was created for this year’s study of reading achievement.  The 2013 index is based on a survey asking states when they planned to complete full implementation of CCSS in classrooms.  Strong states aimed for full implementation by 2012-2013 or earlier.      

Fourth grade NAEP reading scores serve as the achievement measure.  Why fourth grade and not eighth?  Reading instruction is a key activity of elementary classrooms but by eighth grade has all but disappeared.  What remains of “reading” as an independent subject, which has typically morphed into the study of literature, is subsumed under the English-Language Arts curriculum, a catchall term that also includes writing, vocabulary, listening, and public speaking.  Most students in fourth grade are in self-contained classes; they receive instruction in all subjects from one teacher.  The impact of CCSS on reading instruction—the recommendation that non-fiction take a larger role in reading materials is a good example—will be concentrated in the activities of a single teacher in elementary schools. The burden for meeting CCSS’s press for non-fiction, on the other hand, is expected to be shared by all middle and high school teachers.[vii] 

Results

Table 2-1 displays NAEP gains using the 2011 implementation index.  The four year period between 2009 and 2013 is broken down into two parts: 2009-2011 and 2011-2013.  Nineteen states are categorized as “strong” implementers of CCSS on the 2011 index, and from 2009-2013, they outscored the four states that did not adopt CCSS by a little more than one scale score point (0.87 vs. -0.24, a 1.11 point difference).  The non-adopters are the logical control group for CCSS, but with only four states in that category—Alaska, Nebraska, Texas, and Virginia—the comparison is sensitive to big changes in one or two states.  Alaska and Texas both experienced a decline in fourth grade reading scores from 2009-2013.

The 1.11 point advantage in reading gains for strong CCSS implementers is similar to the 1.27 point advantage reported last year for eighth grade math.  Both are small.  The reading difference in favor of CCSS is equal to approximately 0.03 standard deviations of the 2009 baseline reading score.  Also note that the differences were greater in 2009-2011 than in 2011-2013 and that the “medium” implementers performed as well as or better than the strong implementers over the entire four year period (gain of 0.99).

Table 2-2 displays calculations using the 2013 implementation index.  Twelve states are rated as strong CCSS implementers, seven fewer than on the 2011 index.[viii]  Data for the non-adopters are the same as in the previous table.  In 2009-2013, the strong implementers gained 1.27 NAEP points compared to -0.24 among the non-adopters, a difference of 1.51 points.  The thirty-four states rated as medium implementers gained 0.82.  The strong implementers on this index are states that reported full implementation of CCSS-ELA by 2013.  Their larger gain in 2011-2013 (1.08 points) distinguishes them from the strong implementers in the previous table.  The overall advantage of 1.51 points over non-adopters represents about 0.04 standard deviations of the 2009 NAEP reading score, not a difference with real world significance.  Taken together, the 2011 and 2013 indexes estimate that NAEP reading gains from 2009-2013 were one to one and one-half scale score points larger in the strong CCSS implementation states compared to the states that did not adopt CCSS.
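The conversion from scale score points to effect sizes can be sketched with simple arithmetic.  In the sketch below, the baseline standard deviation of roughly 36 scale points is an illustrative assumption (the report states only that the differences amount to about 0.03-0.04 SD); the gain figures are the ones reported in Tables 2-1 and 2-2.

```python
# Convert NAEP scale-score gain differences into approximate effect sizes.
# BASELINE_SD is an illustrative assumption; the text reports only the
# resulting effect sizes (~0.03-0.04 SD), not the SD itself.
BASELINE_SD = 36.0  # assumed SD of 2009 fourth grade NAEP reading scores

def effect_size(gain_strong, gain_nonadopter, sd=BASELINE_SD):
    """Difference in gains, expressed in baseline standard deviations."""
    diff = gain_strong - gain_nonadopter
    return diff, diff / sd

# 2011 index: strong implementers +0.87 vs. non-adopters -0.24
diff_2011, es_2011 = effect_size(0.87, -0.24)
# 2013 index: strong implementers +1.27 vs. non-adopters -0.24
diff_2013, es_2013 = effect_size(1.27, -0.24)

print(f"2011 index: {diff_2011:.2f} points, {es_2011:.3f} SD")
print(f"2013 index: {diff_2013:.2f} points, {es_2013:.3f} SD")
```

Under that assumed SD, the 1.11 and 1.51 point differences work out to roughly 0.03 and 0.04 standard deviations, matching the magnitudes described in the text.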

Common Core and Reading Content

As noted above, the 2013 implementation index is based on when states scheduled full implementation of CCSS in classrooms.  Other than reading achievement, does the index seem to reflect changes in any other classroom variable believed to be related to CCSS implementation?  If the answer is “yes,” that would bolster confidence that the index is measuring changes related to CCSS implementation. 

Let’s examine the types of literature that students encounter during instruction.  Perhaps the most controversial recommendation in the CCSS-ELA standards is the call for teachers to shift the content of reading materials away from stories and other fictional forms of literature in favor of more non-fiction.  NAEP asks fourth grade teachers the extent to which they teach fiction and non-fiction over the course of the school year (see Figure 2-1). 

Historically, fiction dominates fourth grade reading instruction.  It still does.  The percentage of teachers reporting that they teach fiction to a “large extent” exceeded the percentage answering “large extent” for non-fiction by 23 points in 2009 and 25 points in 2011.  In 2013, the difference narrowed to only 15 percentage points, primarily because of non-fiction’s increased use.  Fiction still dominated in 2013, but not by as much as in 2009.

The differences reported in Figure 2-1 are national indicators of fiction’s declining prominence in fourth grade reading instruction.  What about the states?  We know that they were involved to varying degrees with the implementation of Common Core from 2009-2013.  Is there evidence that fiction’s prominence was more likely to weaken in states most aggressively pursuing CCSS implementation? 

Table 2-3 displays the data tackling that question.  Fourth grade teachers in strong implementation states decisively favored the use of fiction over non-fiction in 2009 and 2011.  But the prominence of fiction in those states experienced a large decline in 2013 (-12.4 percentage points).  The decline for the entire four year period, 2009-2013, was larger in the strong implementation states (-10.8) than in the medium implementation (-7.5) or non-adoption states (-9.8).  

Conclusion

This section of the Brown Center Report analyzed NAEP data and two indexes of CCSS implementation, one based on data collected in 2011, the second from data collected in 2013.  NAEP scores for 2009-2013 were examined.  Fourth grade reading scores improved by 1.11 scale score points in states with strong implementation of CCSS compared to states that did not adopt CCSS.  A similar comparison in last year’s BCR found a 1.27 point difference on NAEP’s eighth grade math test, also in favor of states with strong implementation of CCSS.  These differences, although certainly encouraging to CCSS supporters, are quite small, amounting to (at most) 0.04 standard deviations (SD) on the NAEP scale.  A threshold of 0.20 SD—five times larger—is often invoked as the minimum size for a test score change to be regarded as noticeable.  The current study’s findings are also merely statistical associations and cannot be used to make causal claims.  Perhaps other factors are driving test score changes, unmeasured by NAEP or the other sources of data analyzed here. 

The analysis also found that fourth grade teachers in strong implementation states are more likely to be shifting reading instruction from fiction to non-fiction texts.  That trend should be monitored closely to see if it continues.  Other events to keep an eye on as the Common Core unfolds include the following:

1.  The 2015 NAEP scores, typically released in the late fall, will be important for the Common Core.  In most states, the first CCSS-aligned state tests will be given in the spring of 2015.  Based on the earlier experiences of Kentucky and New York, results are expected to be disappointing.  Common Core supporters can respond by explaining that assessments given for the first time often produce disappointing results.  They will also claim that the tests are more rigorous than previous state assessments.  But it will be difficult to explain stagnant or falling NAEP scores in an era when implementing CCSS commands so much attention.   

2.  Assessment will become an important implementation variable in 2015 and subsequent years.  For analysts, the strategy employed here, modeling different indicators based on information collected at different stages of implementation, should become even more useful.  Some states are planning to use Smarter Balanced Assessments, others are using the Partnership for Assessment of Readiness for College and Careers (PARCC), and still others are using their own homegrown tests.   To capture variation among the states on this important dimension of implementation, analysts will need to use indicators that are up-to-date.

3.  The politics of Common Core injects a dynamic element into implementation.  The status of implementation is constantly changing.  States may choose to suspend, to delay, or to abandon CCSS.  That will require analysts to regularly re-configure which states are considered “in” Common Core and which states are “out.”  To further complicate matters, states may be “in” some years and “out” in others.

A final word.  When the 2014 BCR was released, many CCSS supporters commented that it is too early to tell the effects of Common Core.  The point that states may need more time operating under CCSS to realize its full effects certainly has merit.  But that does not discount everything states have done so far—including professional development, purchasing new textbooks and other instructional materials, designing new assessments, buying and installing computer systems, and conducting hearings and public outreach—as part of implementing the standards.  Some states are in their fifth year of implementation.  It could be that states need more time, but innovations can also produce their biggest “pop” earlier in implementation rather than later.  Kentucky was one of the earliest states to adopt and implement CCSS.  That state’s NAEP fourth grade reading score declined in both 2009-2011 and 2011-2013.  The optimism of CCSS supporters is understandable, but a one and a half point NAEP gain might be as good as it gets for CCSS.



[i] These ideas were first introduced in a 2013 Brown Center Chalkboard post I authored, entitled, “When Does a Policy Start?”

[ii] Maria Glod, “Since NCLB, Math and Reading Scores Rise for Ages 9 and 13,” Washington Post, April 29, 2009.

[iii] Mark Schneider, “NAEP Math Results Hold Bad News for NCLB,” AEIdeas (Washington, D.C.: American Enterprise Institute, 2009).

[iv] Lisa Guisbond with Monty Neill and Bob Schaeffer, NCLB’s Lost Decade for Educational Progress: What Can We Learn from this Policy Failure? (Jamaica Plain, MA: FairTest, 2012).

[v] Derek Neal and Diane Schanzenbach, “Left Behind by Design: Proficiency Counts and Test-Based Accountability,” NBER Working Paper No. W13293 (Cambridge: National Bureau of Economic Research, 2007), 13.

[vi] Careful analysts of NCLB have allowed different states to have different starting dates: see Thomas Dee and Brian A. Jacob, “Evaluating NCLB,” Education Next 10, no. 3 (Summer 2010); Manyee Wong, Thomas D. Cook, and Peter M. Steiner, “No Child Left Behind: An Interim Evaluation of Its Effects on Learning Using Two Interrupted Time Series Each with Its Own Non-Equivalent Comparison Series,” Working Paper 09-11 (Evanston, IL: Northwestern University Institute for Policy Research, 2009).

[vii] Common Core State Standards Initiative. “English Language Arts Standards, Key Design Consideration.” Retrieved from: http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/

[viii] Twelve states shifted downward from strong to medium and five states shifted upward from medium to strong, netting out to a seven state swing.


Common Core and classroom instruction: The good, the bad, and the ugly


This post continues a series begun in 2014 on implementing the Common Core State Standards (CCSS).  The first installment introduced an analytical scheme investigating CCSS implementation along four dimensions:  curriculum, instruction, assessment, and accountability.  Three posts focused on curriculum.  This post turns to instruction.  Although the impact of CCSS on how teachers teach is discussed, the post is also concerned with the inverse relationship, how decisions that teachers make about instruction shape the implementation of CCSS.

A couple of points before we get started.  The previous posts on curriculum led readers from the upper levels of the educational system—federal and state policies—down to curricular decisions made “in the trenches”—in districts, schools, and classrooms.  Standards emanate from the top of the system and are produced by politicians, policymakers, and experts.  Curricular decisions are shared across education’s systemic levels.  Instruction, on the other hand, is dominated by practitioners.  The daily decisions that teachers make about how to teach under CCSS—and not the idealizations of instruction embraced by upper-level authorities—will ultimately determine what “CCSS instruction” really means.

I ended the last post on CCSS by describing how curriculum and instruction can be so closely intertwined that the boundary between them is blurred.  Sometimes stating a precise curricular objective dictates, or at least constrains, the range of instructional strategies that teachers may consider.  That post focused on English-Language Arts.  The current post focuses on mathematics in the elementary grades and describes examples of how CCSS will shape math instruction.  As a former elementary school teacher, I offer my own personal opinion on these effects.

The Good

Certain aspects of the Common Core, when implemented, are likely to have a positive impact on the instruction of mathematics. For example, Common Core stresses that students recognize fractions as numbers on a number line.  The emphasis begins in third grade:

CCSS.MATH.CONTENT.3.NF.A.2
Understand a fraction as a number on the number line; represent fractions on a number line diagram.

CCSS.MATH.CONTENT.3.NF.A.2.A
Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.

CCSS.MATH.CONTENT.3.NF.A.2.B
Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.
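The procedure in 3.NF.A.2.B can be modeled directly in code.  This is a sketch of my own, not part of the standards: the function name is hypothetical, and Python’s `fractions` module stands in for exact measurement on the number line.

```python
from fractions import Fraction

def fraction_point(a, b):
    """Locate a/b on the number line as 3.NF.A.2.B describes: mark off
    a lengths of 1/b starting from 0; the endpoint is the number a/b."""
    unit = Fraction(1, b)      # partition the interval [0, 1] into b equal parts
    position = Fraction(0)
    for _ in range(a):         # lay the 1/b-length segment down a times
        position += unit
    return position

print(fraction_point(3, 4))    # → 3/4
print(fraction_point(5, 4))    # → 5/4, the construction works past 1 as well
```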


When I first read this section of the Common Core standards, I stood up and cheered.  Berkeley mathematician Hung-Hsi Wu has been working with teachers for years to get them to understand the importance of using number lines in teaching fractions.[1] American textbooks rely heavily on part-whole representations to introduce fractions.  Typically, students see pizzas, apples, and other objects—usually foods or money—that are divided up into equal parts.  Such models are limited.  They work okay with simple addition and subtraction.  Common denominators present a bit of a challenge, but ½ pizza can also be shown to be 2/4, a half dollar to equal two quarters, and so on. 

With multiplication and division, all the little tricks students learned with whole number arithmetic suddenly go haywire.  Students are accustomed to the fact that multiplying two whole numbers yields a product that is larger than either number being multiplied: 4 X 5 = 20, and 20 is larger than both 4 and 5.[2]  How in the world can ¼ X 1/5 = 1/20, a number much smaller than either ¼ or 1/5?  The part-whole representation has convinced many students that fractions are not numbers.  Instead, they are seen as strange expressions comprising two numbers with a small horizontal bar separating them. 

I taught sixth grade but occasionally visited my colleagues’ classes in the lower grades.  I recall one exchange with second or third graders that went something like this:

“Give me a number between seven and nine.”  Giggles. 

“Eight!” they shouted. 

“Give me a number between two and three.”  Giggles.

“There isn’t one!” they shouted. 

“Really?” I’d ask and draw a number line.  After spending some time placing whole numbers on the number line, I’d observe,  “There’s a lot of space between two and three.  Is it just empty?” 

Silence.  Puzzled little faces.  Then a quiet voice.  “Two and a half?”

You have no idea how many children do not make the transition to understanding fractions as numbers and, because of stumbling at this crucial stage, spend the rest of their careers as students of mathematics convinced that fractions are an impenetrable mystery.  And that’s not true of just students.  California adopted a test for teachers in the 1980s, the California Basic Educational Skills Test (CBEST).  Beginning in 1982, even teachers already in the classroom had to pass it.  I made a nice after-school and summer income tutoring colleagues who didn’t know fractions from Fermat’s Last Theorem.  To be fair, primary teachers, teaching kindergarten or grades 1-2, would not teach fractions as part of their math curriculum and probably hadn’t worked with a fraction in decades.  So they are no different from non-literary types who think Hamlet is just a play about a young guy who can’t make up his mind, has a weird relationship with his mother, and winds up dying at the end.

Division is the most difficult operation to grasp for those arrested at the part-whole stage of understanding fractions.  A problem that Liping Ma posed to teachers is now legendary.[3]

She asked small groups of American and Chinese elementary teachers to divide 1 ¾ by ½ and to create a word problem that illustrates the calculation.  All 72 Chinese teachers gave the correct answer and 65 developed an appropriate word problem.  Only nine of the 23 American teachers solved the problem correctly.  A single American teacher was able to devise an appropriate word problem.  Granted, the American sample was not selected to be representative of American teachers as a whole, but the stark findings of the exercise did not shock anyone who has worked closely with elementary teachers in the U.S.  They are often weak at math.  Many of the teachers in Ma’s study had vague ideas of an “invert and multiply” rule but lacked a conceptual understanding of why it worked.

A linguistic convention exacerbates the difficulty.  Students may cling to the mistaken notion that “dividing in half” means “dividing by one-half.”  It does not.  Dividing in half means dividing by two.  The number line can help clear up such confusion.  Consider a basic, whole-number division problem for which third graders will already know the answer:  8 divided by 2 equals 4.   It is evident that a segment 8 units in length (measured from 0 to 8) is divided by a segment 2 units in length (measured from 0 to 2) exactly 4 times.  Modeling 12 divided by 2 and other basic facts with 2 as a divisor will convince students that whole number division works quite well on a number line. 

Now consider the number ½ as a divisor.  It will become clear to students that 8 divided by ½ equals 16, and they can illustrate that fact on a number line by showing how a segment ½ units in length divides a segment 8 units in length exactly 16 times; it divides a segment 12 units in length 24 times; and so on.  Students will be relieved to discover that on a number line division with fractions works the same as division with whole numbers.

Now, let’s return to Liping Ma’s problem: 1 ¾ divided by ½.   This problem would not be presented in third grade, but it might be in fifth or sixth grade.  Students who have been working with fractions on a number line for two or three years will have little trouble solving it.  They will see that the problem simply asks them to divide a line segment of 1 ¾ units by a segment of ½ units.  The answer is 3 ½.  Some students might estimate that the solution is between 3 and 4 because 1 ¾ lies between 1 ½ and 2, which on the number line are the points at which the ½ unit segment, laid end on end, falls exactly three and four times.  Other students will have learned about reciprocals and that multiplication and division are inverse operations.  They will immediately grasp that dividing by ½ is the same as multiplying by 2—and since 1 ¾ x 2 = 3 ½, that is the answer.  Creating a word problem involving string or rope or some other linearly measured object is also surely within their grasp.
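The number-line reasoning above can be checked with exact rational arithmetic.  This sketch uses Python’s standard `fractions` module, which is my choice of tool for illustration, not anything drawn from the standards:

```python
from fractions import Fraction

# "How many segments of length d fit in a segment of length n?"
# is exactly the question n / d asks; Fraction keeps the arithmetic exact.
half = Fraction(1, 2)

# Whole-number division on the number line: 8 divided by 2 equals 4.
assert Fraction(8) / Fraction(2) == 4

# A 1/2-unit segment divides an 8-unit segment exactly 16 times.
assert Fraction(8) / half == 16

# Liping Ma's problem: 1 3/4 divided by 1/2 equals 3 1/2.
mixed = Fraction(7, 4)           # 1 3/4
answer = mixed / half
assert answer == Fraction(7, 2)  # 3 1/2

# Dividing by 1/2 is the same as multiplying by its reciprocal, 2.
assert mixed / half == mixed * 2

print(answer)  # → 7/2
```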

Conclusion

I applaud the CCSS for introducing number lines and fractions in third grade.  I believe it will instill in children an important idea: fractions are numbers.  That foundational understanding will aid them as they work with more abstract representations of fractions in later grades.   Fractions are a monumental barrier for kids who struggle with math, so the significance of this contribution should not be underestimated.

I mentioned above that instruction and curriculum are often intertwined.  I began this series of posts by defining curriculum as the “stuff” of learning—the content of what is taught in school, especially as embodied in the materials used in instruction.  Instruction refers to the “how” of teaching—how teachers organize, present, and explain those materials.  It’s each teacher’s repertoire of instructional strategies and techniques that differentiates one teacher from another even as they teach the same content.  Choosing to use a number line to teach fractions is obviously an instructional decision, but it also involves curriculum.  The number line is mathematical content, not just a teaching tool.

Guiding third grade teachers towards using a number line does not guarantee effective instruction.  In fact, it is reasonable to expect variation in how teachers will implement the CCSS standards listed above.  A small body of research exists to guide practice. One of the best resources for teachers to consult is a practice guide published by the What Works Clearinghouse: Developing Effective Fractions Instruction for Kindergarten Through Eighth Grade (see full disclosure below).[4]  The guide’s second recommendation is the use of number lines, but it also states that the evidence supporting their effectiveness in teaching fractions is inferred from studies involving whole numbers and decimals.  We need much more research on how and when number lines should be used in teaching fractions.

Professor Wu states the following, “The shift of emphasis from models of a fraction in the initial stage to an almost exclusive model of a fraction as a point on the number line can be done gradually and gracefully beginning somewhere in grade four. This shift is implicit in the Common Core Standards.”[5]  I agree, but the shift is also subtle.  CCSS standards include the use of other representations—fraction strips, fraction bars, rectangles (which are excellent for showing multiplication of two fractions) and other graphical means of modeling fractions.  Some teachers will manage the shift to number lines adroitly—and others will not.  As a consequence, the quality of implementation will vary from classroom to classroom based on the instructional decisions that teachers make.  

The current post has focused on what I believe to be a positive aspect of CCSS based on the implementation of the standards through instruction.  Future posts in the series—covering the “bad” and the “ugly”—will describe aspects of instruction on which I am less optimistic.



[1] See H. Wu (2014). “Teaching Fractions According to the Common Core Standards,” https://math.berkeley.edu/~wu/CCSS-Fractions_1.pdf. Also see "What's Sophisticated about Elementary Mathematics?" http://www.aft.org/sites/default/files/periodicals/wu_0.pdf

[2] Students learn that 0 and 1 are exceptions and have their own special rules in multiplication.

[3] Liping Ma, Knowing and Teaching Elementary Mathematics.

[4] The practice guide can be found at: http://ies.ed.gov/ncee/wwc/pdf/practice_guides/fractions_pg_093010.pdf I serve as a content expert in elementary mathematics for the What Works Clearinghouse.  I had nothing to do, however, with the publication cited.

[5] Wu, page 3.


Implementing Common Core: The problem of instructional time


This is part two of my analysis of instruction and Common Core’s implementation.  I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.”  Having discussed “the “good” in part one, I now turn to “the bad.”  One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record.  Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn.     

The numerator, time spent learning, has also been given the term opportunity to learn.  The denominator, time needed to learn, is synonymous with student aptitude.  By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] 
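Carroll’s ratio can be written out in a few lines.  The cap at 1.0 (a student cannot learn more than all of the content) is a common reading of the model rather than anything Carroll’s essay is quoted on here, so treat this as a sketch:

```python
def degree_of_learning(time_spent, time_needed):
    """Carroll's model: learning as the ratio of time spent to time needed.

    time_spent  -- opportunity to learn (hours, minutes, any unit)
    time_needed -- aptitude expressed as time (same unit)
    The ratio is capped at 1.0: once a student has had all the time
    needed, extra time yields no additional learning.
    """
    if time_needed <= 0:
        raise ValueError("time_needed must be positive")
    return min(time_spent / time_needed, 1.0)

# Two students taught the same lesson in the same 10 hours of instruction:
print(degree_of_learning(10, 20))  # needs 20 hours, learns half: 0.5
print(degree_of_learning(10, 8))   # needs only 8 hours: full learning, 1.0
```

The second call illustrates the problem discussed below: the student who needed only 8 hours sits through 2 hours of instruction that produces no additional learning.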

The source of that variation is largely irrelevant to the constraints placed on instructional decisions.  Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator.  Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like.  There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought.  As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article.  Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll.  It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum.  This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning.  Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today.  Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.     

Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator.  When that happens, students don’t have an adequate opportunity to learn.  They need more time. 

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time.  Time is a limited resource that teachers deploy in the production of learning.  Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary.  Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics.  Excessive instructional time may also negatively affect student engagement.  Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M.  A standard algorithm is a series of steps designed to compute accurately and quickly.  In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers.  Most readers of this post will recognize the standard algorithm for addition.  It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.
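The column-by-column procedure just described can be written out directly.  This digit-level sketch is purely illustrative (a real program would simply use `+`, and the function name is mine):

```python
def standard_addition(a, b):
    """Add two non-negative integers digit by digit, right to left,
    with carrying (regrouping) -- the standard algorithm for addition.
    Illustrative only; Python's + operator does this natively."""
    xs = [int(d) for d in str(a)][::-1]  # digits, ones place first
    ys = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        x = xs[i] if i < len(xs) else 0
        y = ys[i] if i < len(ys) else 0
        total = x + y + carry
        result.append(total % 10)   # digit written below the column
        carry = total // 10         # "carried" to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(standard_addition(478, 356))  # → 834
```

Each loop iteration is one column of the written procedure: add the two digits plus any carry, write the ones digit, carry the tens digit leftward.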

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards.  Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade.  This opens the door for a lot of wasted time.  Garelick questioned the wisdom of teaching several alternative strategies for addition.  He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, could it be taught first. As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 

 

Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations.  He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

 

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades.  Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

 

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

 

3.NBT.2  Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

 

The term “strategies and algorithms” is curious.  Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” 

 

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that, despite these being standards, the CCSS-M will allow them great latitude.  Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

 

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. 

 

Standard algorithms were a key point of contention in the “Math Wars” of the 1990s.   The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii]  The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade.  These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists.  In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 

 

Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade.  That’s what reformers get in the compromise.  They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic.  But the standard algorithm is the only one students are required to learn.  And that exclusivity is intended to please the traditionalists.

 

I agree with Garelick that the compromise leads to problems.  In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home.  The students were being taught how to represent addition with drawings that clustered objects into groups of ten.  The exercises were both time consuming and tedious.  When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning.  The parents withdrew their child from the school and enrolled him in private school.

 

The value of standard algorithms is that they are efficient and packed with mathematics.  Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.  Traditionalists and reformers have different goals.  Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems.  Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

 

I have been a critic of the math reform movement since I taught in the 1980s.  But some of their complaints have merit.  All too often, instruction on standard algorithms has left out meaning.  As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized.  Michael Battista’s research has provided many instances of students clinging to algorithms without understanding.  He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet.  On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15.  In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25.  Despite the obvious discrepancy (25 is not 15, the student agrees), he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii]
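Battista's point can be made concrete with a toy version of the buggy procedure. The function below is my own hypothetical illustration, not code from his study; it adds column by column but drops every carry, which is exactly how 19 + 6 comes out as 15:

```python
def add_forgetting_carry(a: int, b: int) -> int:
    """Column addition that keeps only the ones digit of each column,
    reproducing the 'forgot to carry' error described above
    (a hypothetical illustration, not taken from Battista's study)."""
    result, place = 0, 1
    while a or b:
        column = a % 10 + b % 10         # add the column...
        result += (column % 10) * place  # ...but drop the carry entirely
        a, b, place = a // 10, b // 10, place * 10
    return result

print(add_forgetting_carry(19, 6))  # 15, the student's worksheet answer
print(19 + 6)                       # 25, what counting on from 19 gives
```

A procedure followed faithfully but without understanding produces exactly this kind of confident, wrong answer.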

 

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that?  Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions.  Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

 

Conclusion

Let’s return to Carroll’s model of time and learning.  I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences.  Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade.  Thus, the study of whole number arithmetic is completed by the end of fourth grade.  Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics.  Proficiency is sought along three dimensions:  1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years.  Placing the standard for the division algorithm in sixth grade continues the two-year delay.  For many fourth graders, time spent working on addition and subtraction will be wasted time.  They already have a firm understanding of addition and subtraction.  The same is true for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers.  The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.
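For readers who do not have the model in front of them, Carroll's formulation is usually written as follows (a standard paraphrase of Carroll's 1963 model of school learning, not a formula appearing in this post):

```latex
% Carroll's model of school learning (1963), in its commonly cited form:
\[
\text{degree of learning} = f\!\left(\frac{\text{time actually spent}}{\text{time needed to learn}}\right)
\]
```

When the time allocated to a topic (the numerator) exceeds the time students need (the denominator), the surplus is wasted instructional time, which is the inefficiency described above.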

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures.  Such decisions are made by local educators.  Variation in these decisions will introduce variation in the implementation of the math standards.  It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction.  But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.


[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.  

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001).  “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).


Has Common Core influenced instruction?


The release of 2015 NAEP scores showed national achievement stalling out or falling in reading and mathematics.  The poor results triggered speculation about the effect of Common Core State Standards (CCSS), the controversial set of standards adopted by more than 40 states since 2010.  Critics of Common Core tended to blame the standards for the disappointing scores.  Its defenders said it was too early to assess CCSS’s impact and that implementation would take many years to unfold. William J. Bushaw, executive director of the National Assessment Governing Board, cited “curricular uncertainty” as the culprit.  Secretary of Education Arne Duncan argued that new standards typically experience an “implementation dip” in the early days of teachers actually trying to implement them in classrooms.

In the rush to argue whether CCSS has positively or negatively affected American education, these speculations are vague as to how the standards boosted or depressed learning.  They don’t provide a description of the mechanisms, the connective tissue, linking standards to learning.  Bushaw and Duncan come the closest, arguing that the newness of CCSS has created curriculum confusion, but the explanation falls flat for a couple of reasons.  Curriculum in the three states that adopted the standards, rescinded them, then adopted something else should be extremely confused.  But the 2013-2015 NAEP changes for Indiana, Oklahoma, and South Carolina were a little bit better than the national figures, not worse.[i]  In addition, surveys of math teachers conducted in the first year or two after the standards were adopted found that:  a) most teachers liked them, and b) most teachers said they were already teaching in a manner consistent with CCSS.[ii]  They didn’t mention uncertainty.  Recent polls, however, show those positive sentiments eroding. Mr. Bushaw might be mistaking disenchantment for uncertainty.[iii] 

For teachers, the novelty of CCSS should be dissipating.  Common Core’s advocates placed great faith in professional development to implement the standards.  Well, there’s been a lot of it.  Over the past few years, millions of teacher-hours have been devoted to CCSS training.  Whether all that activity had a lasting impact is questionable.  Randomized control trials have been conducted of two large-scale professional development programs.  Interestingly, although they pre-date CCSS, both programs attempted to promote the kind of “instructional shifts” championed by CCSS advocates. The studies found that if teacher behaviors change from such training—and that’s not a certainty—the changes fade after a year or two.  Indeed, that’s a pattern evident in many studies of educational change: a pop at the beginning, followed by fade out.  

My own work analyzing NAEP scores in 2011 and 2013 led me to conclude that the early implementation of CCSS was producing small, positive changes in NAEP.[iv]  I warned that those gains “may be as good as it gets” for CCSS.[v]  Advocates of the standards hope that CCSS will eventually produce long term positive effects as educators learn how to use them.  That’s a reasonable hypothesis.  But it should now be apparent that a counter-hypothesis has equal standing: any positive effect of adopting Common Core may have already occurred.  To be precise, the proposition is this: any effects from adopting new standards and attempting to change curriculum and instruction to conform to those standards occur early and are small in magnitude.   Policymakers still have a couple of arrows left in the implementation quiver, accountability being the most powerful.  Accountability systems have essentially been put on hold as NCLB sputtered to an end and new CCSS tests appeared on the scene.  So the CCSS story isn’t over.  Both hypotheses remain plausible. 

Reading Instruction in 4th and 8th Grades

Back to the mechanisms, the connective tissue binding standards to classrooms.  The 2015 Brown Center Report introduced one possible classroom effect that is showing up in NAEP data: the relative emphasis teachers place on fiction and nonfiction in reading instruction.  The ink was still drying on new Common Core textbooks when a heated debate broke out about CCSS’s recommendation that informational reading should receive greater attention in classrooms.[vi] 

Fiction has long dominated reading instruction.  That dominance appears to be waning.



After 2011, something seems to have happened.  I am more persuaded that Common Core influenced the recent shift towards nonfiction than I am that Common Core has significantly affected student achievement—for either good or ill.   But causality is difficult to confirm or to reject with NAEP data, and trustworthy efforts to do so require a more sophisticated analysis than presented here.

Four lessons from previous education reforms

Nevertheless, the figures above reinforce important lessons that have been learned from previous top-down reforms.  Let’s conclude with four:

1.  There seems to be evidence that CCSS is having an impact on the content of reading instruction, moving from the dominance of fiction over nonfiction to near parity in emphasis.  Unfortunately, as Mark Bauerlein and Sandra Stotsky have pointed out, there is scant evidence that such a shift improves children’s reading.[vii]

2.  Reading more nonfiction does not necessarily mean that students will be reading higher quality texts, even if the materials are aligned with CCSS.   The Core Knowledge Foundation and the Partnership for 21st Century Learning, both supporters of Common Core, have very different ideas on the texts schools should use with the CCSS.[viii] The two organizations advocate for curricula having almost nothing in common.

3.  When it comes to the study of implementing education reforms, analysts tend to focus on the formal channels of implementation and the standard tools of public administration—for example, intergovernmental hand-offs (federal to state to district to school), alignment of curriculum, assessment and other components of the reform, professional development, getting incentives right, and accountability mechanisms.  Analysts often ignore informal channels, and some of those avenues funnel directly into schools and classrooms.[ix]  Politics and the media are often overlooked.  Principals and teachers are aware of the politics swirling around K-12 school reform.  Many educators undoubtedly formed their own opinions on CCSS and the fiction vs. nonfiction debate before the standard managerial efforts touched them.

4.  Local educators whose jobs are related to curriculum almost certainly have ideas about what constitutes good curriculum.  It’s part of the profession.  Major top-down reforms such as CCSS provide local proponents with political cover to pursue curricular and instructional changes that may be politically unpopular in the local jurisdiction.  Anyone who believes nonfiction should have a more prominent role in the K-12 curriculum was handed a lever for promoting his or her beliefs by CCSS. I’ve previously called these the “dog whistles” of top-down curriculum reform, subtle signals that give local advocates license to promote unpopular positions on controversial issues.


[i] In the four subject-grade combinations assessed by NAEP (reading and math at 4th and 8th grades), IN, SC, and OK all exceeded national gains on at least three out of four tests from 2013-2015.  NAEP data can be analyzed using the NAEP Data Explorer: http://nces.ed.gov/nationsreportcard/naepdata/.

[ii] In a Michigan State survey of teachers conducted in 2011, 77 percent of teachers, after being presented with selected CCSS standards for their grade, thought they were the same as their state’s former standards.  http://education.msu.edu/epc/publications/documents/WP33ImplementingtheCommonCoreStandardsforMathematicsWhatWeknowaboutTeacherofMathematicsin41S.pdf

[iii] In the Education Next surveys, 76 percent of teachers supported Common Core in 2013 and 12 percent opposed.  In 2015, 40 percent supported and 50 percent opposed. http://educationnext.org/2015-ednext-poll-school-reform-opt-out-common-core-unions.

[iv] I used variation in state implementation of CCSS to assign the states to three groups and analyzed differences in the groups’ NAEP gains.

[v] http://www.brookings.edu/~/media/research/files/reports/2015/03/bcr/2015-brown-center-report_final.pdf

[vi] http://www.edweek.org/ew/articles/2012/11/14/12cc-nonfiction.h32.html?qs=common+core+fiction

[vii] Mark Bauerlein and Sandra Stotsky (2012). “How Common Core’s ELA Standards Place College Readiness at Risk.” A Pioneer Institute White Paper.

[viii] Compare the P21 Common Core Toolkit (http://www.p21.org/our-work/resources/for-educators/1005-p21-common-core-toolkit) with Core Knowledge ELA Sequence (http://www.coreknowledge.org/ccss).  It is hard to believe that they are talking about the same standards in references to CCSS.

[ix] I elaborate on this point in Chapter 8, “The Fate of Reform,” in The Tracking Wars: State Reform Meets School Policy (Brookings Institution Press, 1999).


Image Source: © Patrick Fallon / Reuters


Brookings Live: Reading and math in the Common Core era


Event Information

March 28, 2016
4:00 PM - 4:30 PM EDT

Online Only
Live Webcast

And more from the Brown Center Report on American Education


The Common Core State Standards have been adopted as the reading and math standards in more than forty states, but are the frontline implementers—teachers and principals—enacting them? As part of the 2016 Brown Center Report on American Education, Tom Loveless examines the degree to which CCSS recommendations have penetrated schools and classrooms. He specifically looks at the impact the standards have had on the emphasis of non-fiction vs. fiction texts in reading, and on enrollment in advanced courses in mathematics.

On March 28, the Brown Center hosted an online discussion of Loveless's findings, moderated by the Urban Institute's Matthew Chingos.  In addition to the Common Core, Loveless and Chingos also discussed the other sections of the three-part Brown Center Report, including a study of the relationship between ability group tracking in eighth grade and AP performance in high school.

Watch the archived video below.


Common Core’s major political challenges for the remainder of 2016


The 2016 Brown Center Report (BCR), which was published last week, presented a study of Common Core State Standards (CCSS).   In this post, I’d like to elaborate on a topic touched upon but deserving further attention: what to expect in Common Core’s immediate political future. I discuss four key challenges that CCSS will face between now and the end of the year.

Let’s set the stage for the discussion.  The BCR study produced two major findings.  First, several changes that CCSS promotes in curriculum and instruction appear to be taking place at the school level.  Second, states that adopted CCSS and have been implementing the standards have registered about the same gains and losses on NAEP as states that either adopted and rescinded CCSS or never adopted CCSS in the first place.  These are merely associations and cannot be interpreted as saying anything about CCSS’s causal impact.  Politically, that doesn’t really matter. The big story is that NAEP scores have been flat for six years, an unprecedented stagnation in national achievement that states have experienced regardless of their stance on CCSS.  Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores.  No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic.

"Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores. No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic."

TIMSS and PISA scores in November-December

NAEP has two separate test programs.  The scores released in 2015 were for the main NAEP, which began in 1990.  The long term trend (LTT) NAEP, a different test that was first given in 1969, has not been administered since 2012.  It was scheduled to be given in 2016, but was cancelled due to budgetary constraints.  It was next scheduled for 2020, but last fall officials cancelled that round of testing as well, meaning that the LTT NAEP won’t be given again until 2024.  

With the LTT NAEP on hold, only two international assessments will soon offer estimates of U.S. achievement that, like the two NAEP tests, are based on scientific sampling:  PISA and TIMSS.  Both tests were administered in 2015, and the new scores will be released around the Thanksgiving-Christmas period of 2016.  If PISA and TIMSS confirm the stagnant trend in U.S. achievement, expect CCSS to take another political hit.  America’s performance on international tests engenders a lot of hand wringing anyway, so the reaction to disappointing PISA or TIMSS scores may be even more pronounced than what the disappointing NAEP scores generated.

Is teacher support still declining?

Watch Education Next’s survey on Common Core (usually released in August/September) and pay close attention to teacher support for CCSS.  The trend line has been heading steadily south. In 2013, 76 percent of teachers said they supported CCSS and only 12 percent were opposed.  In 2014, teacher support fell to 43 percent and opposition grew to 37 percent.  In 2015, opponents outnumbered supporters for the first time, 50 percent to 37 percent.  Further erosion of teacher support will indicate that Common Core’s implementation is in trouble at the ground level.  Don’t forget: teachers are the final implementers of standards.

An effort by Common Core supporters to change NAEP

The 2015 NAEP math scores were disappointing.  Watch for an attempt by Common Core supporters to change the NAEP math tests. Michael Cohen, President of Achieve, a prominent pro-CCSS organization, released a statement about the 2015 NAEP scores that included the following: "The National Assessment Governing Board, which oversees NAEP, should carefully review its frameworks and assessments in order to ensure that NAEP is in step with the leadership of the states. It appears that there is a mismatch between NAEP and all states' math standards, no matter if they are common standards or not.” 

Reviewing and potentially revising the NAEP math framework is long overdue.  The last adoption was in 2004.  The argument for changing NAEP to place greater emphasis on number and operations, revisions that would bring NAEP into closer alignment with Common Core, also has merit.  I have a longstanding position on the NAEP math framework. In 2001, I urged the National Assessment Governing Board (NAGB) to reject the draft 2004 framework because it was weak on numbers and operations—and especially weak on assessing student proficiency with whole numbers, fractions, decimals, and percentages.  

Common Core’s math standards are right in line with my 2001 complaint.  Despite my sympathy for Common Core advocates’ position, a change in NAEP should not be made because of Common Core.  In that 2001 testimony, I urged NAGB to end the marriage of NAEP with the 1989 standards of the National Council of Teachers of Mathematics, the math reform document that had guided the main NAEP since its inception.  Reform movements come and go, I argued.  NAGB’s job is to keep NAEP rigorously neutral.  The assessment’s integrity depends upon it.  NAEP was originally intended to function as a measuring stick, not as a PR device for one reform or another.  If NAEP is changed it must be done very carefully and should be rooted in the mathematics children must learn.  The political consequences of it appearing that powerful groups in Washington, DC are changing “The Nation’s Report Card” in order for Common Core to look better will hurt both Common Core and NAEP.

Will Opt Out grow?

Watch the Opt Out movement.  In 2015, several organized groups of parents refused to allow their children to take Common Core tests.  In New York state alone, about 60,000 opted out in 2014, skyrocketing to 200,000 in 2015.  Common Core testing for 2016 begins now and goes through May.  It will be important to see whether Opt Out can expand to other states, grow in numbers, and branch out beyond middle- and upper-income neighborhoods.

Conclusion

Common Core is now several years into implementation.  Supporters have had a difficult time persuading skeptics that any positive results have occurred. The best evidence has been mixed on that question.  CCSS advocates say it is too early to tell, and we’ll just have to wait to see the benefits.  That defense won’t work much longer.  Time is running out.  The political challenges that Common Core faces the remainder of this year may determine whether it survives.

Image Source: Jim Young / Reuters

Eurozone desperately needs a fiscal transfer mechanism to soften the effects of competitiveness imbalances


The eurozone has three problems: national debt obligations that cannot be met, medium-term imbalances in trade competitiveness, and long-term structural flaws.

The short-run problem requires more of the monetary easing that Germany has, with appalling shortsightedness, been resisting, and less of the near-term fiscal restraint that Germany has, with equally appalling shortsightedness, been seeking. To insist that Greece meet all of its near-term debt service obligations makes about as much sense as did French and British insistence that Germany honor its reparations obligations after World War I. The latter could not be and were not honored. The former cannot and will not be honored either.

The medium-term problem is that, given a single currency, labor costs are too high in Greece and too low in Germany and some other northern European countries. Because adjustments in currency values cannot correct these imbalances, differences in growth of wages must do the job—either wage deflation and continued depression in Greece and other peripheral countries, wage inflation in Germany, or both. The former is a recipe for intense and sustained misery. The latter, however politically improbable it may now seem, is the better alternative.

The long-term problem is that the eurozone lacks the fiscal transfer mechanisms necessary to soften the effects of competitiveness imbalances while other forms of adjustment take effect. This lack places extraordinary demands on the willingness of individual nations to undertake internal policies to reduce such imbalances. Until such fiscal transfer mechanisms are created, crises such as the current one are bound to recur.

Present circumstances call for a combination of short-term expansionary policies that have to be led or accepted by the surplus nations, notably Germany, who will also have to recognize and accept that not all Greek debts will be paid or that debt service payments will not be made on time and at originally negotiated interest rates. The price for those concessions will be a current and credible commitment eventually to restore and maintain fiscal balance by the peripheral countries, notably Greece.


Publication: The International Economy
Image Source: © Vincent Kessler / Reuters

King v. Burwell: Chalk one up for common sense


The Supreme Court today decided that Congress meant what it said when it enacted the Affordable Care Act (ACA). The ACA requires people in all 50 states to carry health insurance and provides tax credits to help them afford it. To have offered such credits only in the dozen states that set up their own exchanges would have been cruel and unsustainable because premiums for many people would have been unaffordable.

But the law said that such credits could be paid in exchanges ‘established by a state,’ which led some to claim that the credits could not be paid to people enrolled by the federally operated exchange. In his opinion, Chief Justice Roberts euphemistically calls that wording ‘inartful.’ Six Supreme Court justices decided that, read in its entirety, the law provides tax credits in every state, whether the state manages the exchange itself or lets the federal government do it for them.

That decision is unsurprising. More surprising is that the Court agreed to hear the case. When it did so, cases on the same issue were making their way through four federal circuits. In only one of the four circuits was there a standing decision, and it found that tax credits were available everywhere. It is customary for the Supreme Court to wait to take a case until action in lower courts is complete or two circuits have disagreed. In this situation, the justices, eyeing the electoral calendar, may have preferred to hear the case sooner rather than later to avoid confronting it in the middle of a presidential election.

Whatever the Court’s motives for taking the case, its willingness to hear it caused supporters of the Affordable Care Act enormous unease. Were the more conservative members of the Court poised to accept an interpretation of the law that ACA supporters found ridiculous but that inartful legislative drafting gave the gloss of plausibility? Judicial demeanor at oral argument was not comforting. A 5-4 decision disallowing payment of tax credits seemed ominously plausible.

Future Challenges for the ACA

The Court’s 6-3 decision ended those fears. The existential threat to health reform from litigation is over. But efforts to undo the Affordable Care Act are not at an end. They will continue in the political sphere. And that is where they should be. ACA opponents know that there is little chance for them to roll back the Affordable Care Act in any fundamental way as long as a Democrat is in the White House. To dismantle the law, they must win the presidency in 2016.

But winning the presidency will not be enough. It would be mid-2017 before ACA opponents could draft and enact legislation to curb the Affordable Care Act, and months more before it could take effect. To borrow a metaphor from the military, even if those opposed to the ACA win the presidency, they will have to deal with ‘facts on the ground.’

Well over 30 million Americans will be receiving health insurance under the Affordable Care Act. That will include people who can afford health insurance because of the tax credits the Supreme Court affirmed today. It will include millions more insured through Medicaid in the steadily growing number of states that have agreed to extend Medicaid coverage. It will include the young adult children covered under parental plans because the ACA requires this option.

Insurance companies will have millions more customers because of the ACA. Hospitals will fill more beds because previously uninsured people will be able to afford care, and will have fewer unpaid bills from patients whom, under previous law, they had to admit despite their lack of insurance. Drug companies and device manufacturers will be enjoying increased sales because of the ACA.

The elderly will have better drug coverage because the ACA has eliminated the notorious ‘donut hole’—the drug expenditures that Medicare previously did not cover.

Those facts will discourage any frontal assault on the ACA, particularly if the rate of increase of health spending remains as well controlled as it has been for the past seven years.

Of course, differences between supporters and opponents of the ACA will not vanish. But those differences need not preclude constructive legislation. Beginning in 2017, the ACA gives states an opening to propose, alone or in groups, alternative ways of achieving the goals of the Affordable Care Act. The law authorizes the president to approve such waivers if they serve the goals of the law. The United States is large and diverse. Use of this authority may help defuse the bitter acrimony surrounding Obamacare, as my colleague Stuart Butler has suggested. At the same time, Obamacare supporters have their own list of changes that they believe would improve the law. At the top of the list is fixing the ‘family glitch,’ a drafting error that unintentionally deprives many families of access to the insurance exchanges and to tax credits that would make insurance affordable.

As Chief Justice Roberts wrote near the end of his opinion of the Court, “In a democracy, the power to make the law rests with those chosen by the people....Congress passed the Affordable Care Act to improve health insurance markets, not to destroy them.” The Supreme Court decision assuring that tax credits are available in all states spares the nation chaos and turmoil. It returns the debate about health care policy to the political arena where it belongs. In so doing, it brings a bit closer the time when the two parties may find it in their interest to sit down and deal with the twin realities of the Affordable Care Act: it is imperfect legislation that needs fixing, and it is decidedly here to stay.


Image Source: © Jim Tanner / Reuters
     
 
 





Will left vs. right become a fight over ethnic politics?

The first night of the Democratic National Convention was a rousing success, with first lady Michelle Obama and progressive icon Sen. Elizabeth Warren offering one of the most impressive successions of speeches I can remember seeing. It was inspiring and, moreover, reassuring to see a Muslim – Congressman Keith Ellison – speaking to tens of […]

      
 
 




com

End of life planning: An idea whose time has come?


Far too many people reach their advanced years without planning for how they want their lives to end. The result too often is needless suffering, reduced dignity and autonomy, and agonizing decisions for family members.

Addressing these end-of-life issues is difficult. Most of us don’t want to confront them for ourselves or our family members. And until recently, many people resisted the idea of reimbursing doctors for end-of-life counseling sessions. In 2009, Sarah Palin labeled such sessions the first step in establishing “death panels.” Although no such thing was contemplated when Representative Earl Blumenauer (D-Oregon) proposed such reimbursement, the majority of the public believed that death panels and euthanasia were just around the corner. Even the Obama Administration subsequently backed away from efforts to allow such reimbursement.

Fortunately, this is now history. In the past year or two the tenor of the debate has shifted toward greater acceptance of the need to deal openly with these issues. At least three developments illustrate the shift.

First, talk of “death panels” has receded, and new regulations, approved in late 2015 to take effect in January of this year, now allow Medicare reimbursement for end-of-life counseling. The comment period leading up to this decision was, by most accounts, relatively free of the divisive rhetoric that characterized earlier debates. Both the American Medical Association and the American Hospital Association have signaled their support.

Second, physicians are increasingly recognizing that the objective of extending life must be balanced against the expressed priorities of their patients, which often include the quality and not just the length of remaining life. Atul Gawande’s best-selling book, Being Mortal, beautifully illustrates the challenges for both doctors and patients. With well-grounded and persuasive logic, Gawande speaks of the need to de-medicalize death and dying.

The third development is perhaps the most surprising. It is a bold proposal advanced by Governor Jeb Bush before he bowed out of the presidential race, suggesting that eligibility for Medicare be conditioned on having an advance directive. His interest in these issues goes back to his time as governor of Florida, when he became embroiled in a dispute over the removal of a feeding tube from a comatose patient, Terri Schiavo. Ms. Schiavo’s husband and parents were at odds about what to do, her husband favoring removal and her parents wishing to sustain life. Although the governor sided with the parents, in the end the courts decided in favor of the husband and allowed her to die. If an advance directive had existed, the family disagreement, along with a long and contentious court battle, could have been avoided.

The point of such directives is not to pressure people into choosing one option over another but simply to ensure that they consider their own preferences while they are still able. Making this a requirement for receipt of Medicare would almost surely encourage more people to think seriously about the type of care they would like toward the end of life and to talk with both their doctors and their families about these views. For many others, however, it would be a step too far and might reverse the new openness to advance planning. A softer version nudging Medicare applicants to address these issues might be more acceptable: they would be asked to review several advance directive protocols and to choose one (or substitute their own). If they felt strongly that such planning was inappropriate, they could opt out of the process entirely and still receive their benefits.

Advance care planning should not be linked only to Medicare. We should encourage people to make these decisions earlier in their lives and provide opportunities for them to revisit their initial decisions. This could be accomplished by implementing a similar nudge-like process for Medicaid recipients and those covered by private insurance.

Right now too few people are well informed about their end-of-life options, have talked to their doctors or their family members, or have created the necessary documents. Only about half of those who have reached the age of 60 have an advance directive such as a living will or a power of attorney specifying their wishes. Individual preferences will naturally vary. Some will want every possible treatment to forestall death, even if it comes with some suffering and only a small hope of recovery; others will want to avoid this by being allowed to die sooner or in greater comfort. Research suggests that when given a choice, most people will choose comfort care over extended life.

In the absence of advance planning, the choice of how one dies is often left to doctors, hospitals, and relatives whose wishes may or may not represent the preferences of the individual in their care. For example, most people would prefer to die at home but the majority do not. Physicians are committed to saving lives and relatives often feel guilty about letting a loved one “go.”

The costs of prolonging life when there is little point in doing so can be high. The average Medicare patient costs the government $33,000 in the last year of life, and spending in that final year accounts for 25 percent of all Medicare spending. Granted, no one knows in advance which year is “their last,” so these data exaggerate the savings that better advance planning might yield; but even if the savings were 10 percent, that would represent over $50 billion a year. Dr. Ezekiel Emanuel, an expert in this area, notes that hospice care can reduce costs by 10 to 20 percent for cancer patients but warns that little or no savings have accompanied palliative care for heart failure or emphysema patients, for example. This could reflect the late use of palliative care in such cases or the fact that palliative care is more expensive than assumed.
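
As a back-of-the-envelope check of these figures, the arithmetic can be sketched as follows. The $600 billion baseline for total annual Medicare spending is an assumption (an approximate mid-2010s level, not stated in the text); the 25 percent and 10 percent figures are those quoted above.

```python
# Back-of-the-envelope check of the end-of-life spending figures.
# ASSUMPTION: total Medicare spending of roughly $600 billion per year
# (an approximate mid-2010s level, not stated in the article).
total_medicare = 600e9

# From the article: spending in beneficiaries' final year of life is
# 25 percent of all Medicare spending.
last_year_spending = total_medicare * 0.25

# "Even if it is 10 percent": savings equal to 10 percent of total spending.
implied_savings = total_medicare * 0.10

print(f"Final-year spending: ${last_year_spending / 1e9:.0f} billion")
print(f"10 percent savings:  ${implied_savings / 1e9:.0f} billion")
# Final-year spending: $150 billion
# 10 percent savings:  $60 billion  (consistent with "over $50 billion a year")
```

Under that assumed baseline the numbers are consistent with the "over $50 billion a year" figure; a different baseline would shift both results proportionally.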

In the end, Dr. Emanuel concludes, and I heartily agree, that a call for better advance planning should not be based primarily on its potential cost savings but rather on the respect it affords the individual to die in dignity and in accordance with their own preferences.


Editor's note: This piece originally appeared in Inside Sources.

     
 
 





To help low-income American households, we have to close the "work gap"


When Franklin Roosevelt delivered his second inaugural address on January 20, 1937, he lamented the “one-third of a nation ill-housed, ill-clad, ill-nourished.” He challenged Americans to measure their collective progress not by “whether we add more to the abundance of those who have much; [but rather] whether we provide enough for those who have too little.” In our new paper, One third of a nation: Strategies for helping working families, we ask a simple question: How are we doing?

In brief, we find that:

  • The gulf in labor market income between the haves and have-nots remains wide. The median income of households in the bottom third in 2014 was $24,000, just a little more than a quarter of the median of $90,000 for the top two-thirds.
  • The bottom-third households are disproportionately made up of minority adults, adults with limited educational attainment, and single parents.  
  • The most important reason for the low incomes of the bottom third is a “work gap”: the fact that many are not employed at all, or work limited hours. 

The work gap

The decline in labor force participation rates has been widely documented, but the growing work gap between the bottom third and the rest of the population is truly striking:

While the share of men who are employed in the top two-thirds has been quite stable since 1980, lower-income men’s work rates have declined by 11 percentage points. What about women?

Middle- and upper-income women have increased their work rates by 13 percentage points. This has helped maintain or even increase their families’ incomes. But employment rates among lower-income women have been flat, despite reforms of the welfare system and safety net designed to encourage work.

Why the lack of paid work for the bottom third?

Many on the left point to problems like low pay and lack of access to affordable childcare, and so favor a higher minimum wage and more subsidies for daycare. For many conservatives, the problem is rooted in family breakdown and a dependency-inducing safety net. They therefore champion proposals like marriage promotion programs and strict work requirements for public benefits. Most agree about the importance of education.

Using data from the Census Bureau, we model the impact of a range of such proposals: higher graduation rates from high school, a tighter labor market, a higher minimum wage, and “virtual” marriages between single mothers and unattached men. In isolation, each has only modest effects. In our model, the only significant boost to income comes from employment, and in particular from assuming that all bottom-third household heads work full time:

Time to debate some more radical solutions 

It may be that the standard solutions to the problems of the bottom third, while helpful, are no longer sufficient. A debate about whether to make safety net programs such as Food Stamps and housing assistance conditional on work or training is already underway. So is discussion of other solutions, such as subsidized jobs (created by some states during the Great Recession as a natural complement to a work-conditioned safety net), more work sharing (used in Germany during the recession), or even a universal basic income (being considered by Swiss voters in June).


Image Source: © Stephen Lam / Reuters
      
 
 





Money for nothing: Why a universal basic income is a step too far


The idea of a universal basic income (UBI) is certainly an intriguing one, and has been gaining traction. Swiss voters just turned it down. But it is still alive in Finland, in the Netherlands, in Alaska, in Oakland, CA, and in parts of Canada. 

Advocates of a UBI include Charles Murray on the right and Anthony Atkinson on the left. This surprising alliance alone makes it interesting, and it is a reasonable response both to a growing pool of Americans made jobless by the march of technology and to a safety net that is overly complex and bureaucratic. A comprehensive and excellent analysis in The Economist points out that while fears about technological unemployment have previously proved misleading, “the past is not always a good guide to the future.”

Hurting the poor

Robert Greenstein argues, however, that a UBI would actually hurt the poor by reallocating support up the income scale. His logic is inescapable: either we have to spend additional trillions providing income grants to all Americans or we have to limit assistance to those who need it most. 

One option is to provide unconditional payments along the lines of a UBI, but to phase it out as income rises. Libertarians like this approach since it gets rid of bureaucracies and leaves the poor free to spend the money on whatever they choose, rather than providing specific funds for particular needs. Liberals fear that such unconditional assistance would be unpopular and would be an easy target for elimination in the face of budget pressures. Right now most of our social programs are conditional. With the exception of the aged and the disabled, assistance is tied to work or to the consumption of necessities such as food, housing, or medical care, and our two largest means-tested programs are Food Stamps and the Earned Income Tax Credit.

The case for paternalism

Liberals have been less willing to openly acknowledge that a little paternalism in social policy may not be such a bad thing. In fact, progressives and libertarians alike are loath to admit that many of the poor and jobless are lacking more than just cash. They may be addicted to drugs or alcohol, suffer from mental health issues, have criminal records, or have difficulty functioning in a complex society. Money may be needed but money by itself does not cure such ills. 

A humane and wealthy society should provide the disadvantaged with adequate services and support. But there is nothing wrong with making assistance conditional on individuals fulfilling some obligation whether it is work, training, getting treatment, or living in a supportive but supervised environment.

In the end, the biggest problem with a universal basic income may not be its costs or its distributive implications, but the flawed assumption that money cures all ills.  

Image Source: © Tom Polansek / Reuters
      
 
 





On North Korea, press for complete denuclearization, but have a plan B

The goal President Trump will try to advance in Vietnam – the complete denuclearization of North Korea – is a goal genuinely shared by the ROK, China, Japan, Russia, and many other countries. For the ROK, it would remove a major asymmetry with its northern neighbor and a barrier to North-South reconciliation. For China, it…

       





Is NYC’s Bold Transportation Commissioner a Victim of Her Own Success?

The New York Times’ profile of the celebrated and embattled New York City transportation commissioner, Janette Sadik-Khan, shows how getting things done in a democracy can be bad for your political future. Sadik-Khan has increased the number of bike lanes by over 60 percent and removed cars from congested places like Herald and Times squares, enabling them…

       





Taxing capital income: Mark-to-market and other approaches

Given increased income and wealth inequality, much recent attention has been devoted to proposals to increase taxes on the wealthy (such as imposing a tax on accumulated wealth). Since capital income is highly skewed toward the ultra-wealthy, methods of increasing taxes on capital income provide alternative approaches for addressing inequality through the tax system. Marking…

       





How a VAT could tax the rich and pay for universal basic income

The Congressional Budget Office just projected a series of $1 trillion budget deficits—as far as the eye can see. Narrowing that deficit will require not only spending reductions and economic growth but also new taxes. One solution that I’ve laid out in a new Hamilton Project paper, "Raising Revenue with a Progressive Value-Added Tax,” is…

       





Webinar: Reopening and revitalization in Asia – Recommendations from cities and sectors

As COVID-19 continues to spread through communities around the world, Asian countries that had been on the front lines of combatting the virus have also been the first to navigate the reviving of their societies and economies. Cities and economic sectors have confronted similar challenges with varying levels of success. What best practices have been…

       





A preview of President Obama's upcoming trip to Cuba and Argentina


In advance of President Obama’s historic trip to Cuba and Argentina, three Brookings scholars participated in a media roundtable to offer context and outline their expectations for the outcomes of the trip. Richard Feinberg and Ted Piccone discussed Cuba–including developments in the U.S.-Cuba relationship, the Cuban economy, and human rights on the island–and Harold Trinkunas offered insight on Argentina, inter-American relations, and the timing of the visit.

Read the transcript (PDF) »

Richard Feinberg:

The idea is to promote a gradual incremental transition to a more open, pluralistic and prosperous Cuba integrated into global markets of goods, capital, and ideas. It is a long-term strategy. It cannot be measured by quarterly reports.

Ted Piccone:

...the key [is] to unlock a whole set of future changes that I think will be net positive for the United States, but it is going to take time, and it is not going to happen overnight.

Harold Trinkunas:

Cuba is really about moving, among other things, a stumbling block to better relations with Latin America, and Argentina is about restoring a positive relationship with a key swing state in the region that was once one of our most important allies in the region.


Image Source: © Alexandre Meneghini / Reuters
      
 
 





African Union Commission elections and prospects for the future


The African Union (AU) will hold its 27th Heads of State Assembly in Kigali from July 17-18, 2016, as part of its ongoing annual meetings, during which time it will elect individuals to lead the AU Commission for the next four years. Given the fierce battle for the chairperson position in 2012, and given that the AU has increasingly been called upon to assume more responsibility for issues affecting the continent (from the Ebola epidemic that ravaged West Africa in 2013-14 to civil wars in several countries, including Libya, the Central African Republic, and South Sudan), both the AU Commission and its leadership have become very important and extremely prestigious actors. The upcoming elections are not symbolic: They are about choosing trusted and competent leaders to guide the continent in good times and bad.

Structure of the African Union

The African Union (AU) [1] came into being on July 9, 2002 and was established to replace the Organization of African Unity (OAU). The AU’s highest decisionmaking body is the Assembly of the African Union, which consists of all the heads of state and government of the member states of the AU. The chairperson of the assembly is the ceremonial head of the AU and is elected by the Assembly of Heads of State to serve a one-year term. This assembly is currently chaired by President Idriss Déby of Chad.

The AU’s secretariat is called the African Union Commission [2] and is based in Addis Ababa. The chairperson of the AU Commission is its chief executive officer, the AU’s legal representative, and the accounting officer of the commission. The chairperson is directly responsible to the AU’s Executive Council. The current chairperson of the AU Commission is Dr. Nkosazana Dlamini Zuma of South Africa; she is assisted by a deputy chairperson, currently Erastus Mwencha of Kenya.

The likely nominees for chairperson

Dr. Zuma has decided not to seek a second term in office and, hence, this position is open for contest. The position of deputy chairperson will also become vacant, since Mwencha is not eligible to serve in the new commission.

Notably, the position of chairperson of the AU Commission does not only bring prestige and continental recognition to the person that is elected to serve but also to the country and region from which that person hails. Already, the Southern African Development Community (SADC), Dr. Zuma’s region, is arguing that it is entitled to another term since she has decided not to stand for a second. Other regions, such as eastern and central Africa, have already identified their nominees. It is also rumored that some regions have already initiated diplomatic efforts to gather votes for their preferred candidates.

In April 2016, SADC chose Botswana’s minister of foreign affairs, Dr. Pelonomi Venson-Moitoi, as its preferred candidate. Nevertheless, experts believe that even if South Africa flexes its muscles to support Venson-Moitoi’s candidacy (which it is most likely to do), it is not likely to succeed this time: Botswana has not always supported the AU on critical issues, such as the International Criminal Court, and hence does not have the goodwill necessary to garner support for its candidate among the various heads of state.

Venson-Moitoi is expected to face two other candidates—Dr. Specioza Naigaga Wandira Kazibwe of Uganda (representing east Africa) and Agapito Mba Mokuy of Equatorial Guinea (representing central Africa). Although Mokuy is relatively unknown, his candidacy could be buoyed by the argument that a Spanish-speaking national has never held the chairperson position, as well as the fact that, despite its relatively small size, Equatorial Guinea—and its president, Teodoro Obiang Nguema—has given significant assistance to the AU over the years. Obiang Nguema’s many financial and in-kind contributions to the AU could endear his country and its candidate to the other members of the AU.

In fact, during his long tenure as president of Equatorial Guinea, Obiang Nguema has shown significant interest in the AU, has attended all assemblies, and has made major contributions to the organization. In addition to the fact that Equatorial Guinea hosted AU summits in 2011 and 2014, Obiang Nguema served as AU chairperson in 2011. Thus, a Mokuy candidacy for chairperson of the AU Commission could find favor among those who believe it would give voice to small and often marginalized countries, as well as to members of the continent’s Spanish-speaking community. Finally, on several issues (from the political situation in Burundi to the International Criminal Court and its relations with Africa), the views of South Africa, one of the continent’s most important and influential countries, appear closer to Equatorial Guinea’s than to Botswana’s.

Of course, both Venson-Moitoi and Kazibwe are seasoned civil servants with international and administrative experience and have the potential to function as an effective chairperson. However, the need to give voice within the AU to the continent’s historically marginalized regions could push Mokuy’s candidacy to the top.

Nevertheless, supporters of a Mokuy candidacy may worry that the accusations of corruption and repression leveled against Equatorial Guinea by the international community could negatively affect how their candidate is perceived by voters.

Also important to voters is a candidate’s relationship with former colonial powers. In fact, during the last election, one argument that helped defeat then-Chairperson Jean Ping was that both he and his (Gabonese) government were too pro-France. This issue may not be a factor in the 2016 elections, though: Equatorial Guinea, Uganda, and Botswana are not considered to be especially close to their former colonizers.

Finally, gender and regional representation should be important considerations for the voters who will be called upon to choose a chairperson for the AU Commission. Both Venson-Moitoi and Kazibwe are women, and the election of either of them would continue to support diversity within African leadership. Then again, Mr. Mokuy’s election would enhance regional and small-state representation.

The fight to be commissioner of peace and security

Also open for contest are the portfolios of Peace and Security, Political Affairs, Infrastructure and Energy, Rural Economy and Agriculture, Human Resources, and Science and Technology. Many countries are vying for these positions on the commission in an effort to ensure that their status within the AU is not marginalized. For example, Nigeria and Algeria, both major regional leaders, are competing for the position of commissioner of Peace and Security. Algeria is keen to keep this post: It has held it for the last decade, and, if it loses, it will have no representation on the next commission, significantly diminishing the country’s influence in the AU.

Nigeria’s decision to contest the position of commissioner of Peace and Security follows the decision by the administration of President Muhammadu Buhari to give up the leadership of Political Affairs. Historically, Nigeria has been unwilling to compete openly against regional powers for leadership positions in the continent’s peace and security area. Buhari’s decision to contest the Peace and Security portfolio is very risky: a loss to Algeria and the other contesting countries would leave Nigeria without a position on the commission and would be quite humiliating to the president and his administration.

Struggling to maintain a regional, gender, and background balance

Since the AU came into being in 2002, there has been an unwritten rule that regional powers (e.g., Algeria, Kenya, Nigeria, South Africa) should not lead or occupy key positions in the AU’s major institutions. Thus, when Dr. Zuma was elected in 2012, South Africa was severely criticized, especially by some smaller African countries, for breaking that rule. The hope, especially of the non-regional leaders, is that the 2016 election will represent a return to the status quo ante since most of the candidates for the chairperson position hail from small- and medium-sized countries.

While professional skills and international experience are critical for an individual to serve on the commission, the AU is quite concerned about the geographical distribution of leadership positions, as well as the representation of women on the commission, as noted above. In fact, the commission’s statutes mandate that each region present two candidates (one female and the other male) for every portfolio. Article 6(3) of the commission’s statutes states that “[a]t least one Commissioner from each region shall be a woman.” Unfortunately, women currently make up only a very small proportion of those contesting positions in the next commission. Thus, participants must keep in mind the need to create a commission that reflects the continent’s diversity, especially in terms of gender and geography.

Individuals who have served in government and/or worked for an international organization dominate leadership positions in the commission. Unfortunately, individuals representing civil society organizations are poorly represented on the nominee lists, which is unsurprising given that the selection process is controlled by civil servants from states and regional organizations. Although this approach to staffing the commission guarantees the selection of skilled and experienced administrators, it could burden the commission with the types of bureaucratic problems that are common throughout the civil services of African countries, notably rigidity, tunnel vision, and the inability or unwillingness to undertake bold and progressive initiatives.

No matter who wins, the African Union faces an uphill battle

The AU currently faces many challenges, some of which require urgent and immediate action and others of which can be resolved only through long-term planning. For example, the fight against terrorism and violent extremism, and securing the peace in South Sudan, Burundi, Libya, and other states and regions consumed by violent ethno-cultural conflict, require urgent and immediate action from the AU. Issues requiring long-term planning include helping African countries improve their governance systems, strengthening the African Court of Justice and Human Rights, facilitating economic integration, effectively addressing extreme poverty and inequality in the distribution of income and wealth, responding effectively and fully to pandemics, and working toward the equitable allocation of water, especially in urban areas.

Finally, there is the AU’s dependence on foreign aid for its financing. When Dr. Dlamini Zuma took over as chairperson of the AU Commission in 2012, she was quite surprised by the extent to which the AU depends on budget subventions from international donors and feared that such dependence could interfere with the organization’s operations. The AU budget for 2016 is $416,867,326, of which $169,833,340 (about 41 percent) is assessed on member states and $247,033,986 (about 59 percent) is to be secured from international partners. The main foreign donors are the United States, Canada, China, and the European Union.
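
The budget shares quoted above can be verified directly from the dollar figures in the text; a simple arithmetic check:

```python
# Check the 2016 AU budget shares from the dollar amounts given in the text.
total = 416_867_326
assessed_on_members = 169_833_340
from_partners = 247_033_986

member_share = assessed_on_members / total * 100   # share assessed on member states
partner_share = from_partners / total * 100        # share sought from partners

print(f"Member states: {member_share:.1f}%")          # Member states: 40.7%
print(f"International partners: {partner_share:.1f}%")  # International partners: 59.3%
```

The two components sum exactly to the stated total, so the rounded shares add to 100 percent.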

Within Africa, South Africa, Angola, Nigeria, and Algeria are the richer countries that most reliably pay their assessments. Other relatively rich countries, such as Egypt, Libya, Sudan, and Cameroon, are struggling to pay. Libya’s civil war and its inability to form a permanent government are interfering with its ability to meet its financial obligations, even to its own citizens. Nevertheless, South Africa, Nigeria, Angola, Egypt, and Libya, the continent’s richest countries, are expected eventually to meet as much as 60 percent of the AU’s budget and to help reduce the organization’s continued dependence on international donors. While these major continental and international donors are not expected to have significant influence on the elections for leadership positions on the AU Commission, they are likely to remain a determining factor in the types of programs that the AU can undertake.

Dealing fully and effectively with the multifarious issues that plague the continent requires AU Commission leadership that is not only well-educated and skilled, but that has the foresight to help the continent develop into an effective competitor in the global market and a full participant in international affairs. In addition to helping the continent secure the peace and provide the enabling environment for economic growth and the creation of wealth, this crop of leaders should provide the continent with the leadership necessary to help states develop and adopt institutional arrangements and governing systems that guarantee the rule of law, promote the protection of human rights, and advance inclusive economic growth and development.


[1] The AU consists of all the countries on the continent that are members of the United Nations, except the Kingdom of Morocco, which withdrew after the organization recognized the Sahrawi Arab Democratic Republic (Western Sahara). Morocco claims the Western Sahara as part of its territory.

[2] The AU Commission is made up of a number of commissioners who deal with various policy areas, including peace and security, political affairs, infrastructure and energy, social affairs, trade and industry, rural economy and agriculture, human resources, science and technology, and economic affairs. According to Article 3 of its Statutes, the Commission is empowered to “represent the Union and defend its interests under the guidance of and as mandated by the Assembly and Executive Council.”