
Helping close divisions in the US: Insights from the American Well-Being Project

Issues of despair in the United States are diverse, widespread, and politically fueled, ranging from concentrated poverty and crime in cities to the opioid crisis plaguing poor rural towns. Local leaders and actors in disconnected communities need public policy resources and inputs beyond what has traditionally been available. Scholars at Brookings and Washington University in…

       





Do social protection programs improve life satisfaction? Lessons from Iraq

There is much debate now—in both developed and developing economies—on the merits or de-merits of universal basic income (UBI), with strong opinions on either side. Advocates clash with those who see targeted transfers to the poor—such as the conditional cash transfers first pioneered in Latin America—as better at providing incentives for long-term investments in health,…

       





Progress paradoxes in China, India, and the US: A tale of growing but unhappy countries

What we know depends on what we measure. Traditional income-based metrics, such as GDP and poverty headcounts, tell a story of unprecedented economic development, as seen by improvements in longevity, health, and literacy. Yet, well-being metrics, which are based on large-scale surveys of individuals around the world and assess their daily moods, satisfaction with life,…

       





Do social protection programs improve life satisfaction?

An extensive literature examines the link between social protection-related public spending and objective outcomes of well-being such as income, employment, education, and health (see Department for International Development [DFID], 2011; ILO, 2010; World Bank, 2012). Much less attention has been given to how government social protection policies influence individuals’ own sense of well-being, particularly in…

       





Progress paradoxes and sustainable growth

The past century is full of progress paradoxes, with unprecedented economic development, as evidenced by improvements in longevity, health, and literacy. At the same time, we face daunting challenges such as climate change, persistent poverty in poor and fragile states, and increasing income inequality and unhappiness in many of the richest countries. Remarkably, some of…

       





Why Bridgegate proves we need fewer hacks, machines, and back room deals, not more


I had been mulling a rebuttal to my colleague and friend Jon Rauch’s interesting—but wrong—new Brookings paper praising the role of “hacks, machines, big money, and back room deals” in democracy. I thought the indictments of Chris Christie’s associates last week provided a perfect example of the dangers of all of that, and so of why Jon was incorrect. But in yesterday’s L.A. Times, he beat me to it, himself defending the political morality (if not the efficacy) of their actions, and in the process delivering a knockout blow to his own position.

Bridgegate is a perfect example of why we need fewer "hacks, machines, big money, and back room deals" in our politics, not more. There is no justification whatsoever for government officials abusing their powers, stopping emergency vehicles and risking lives, and making kids late for school and parents late for their jobs, all to retaliate against a mayor who withholds an election endorsement. We vote in our democracy to make government work, not to break it. We expect officials to serve the public, not their personal interests. This conduct weakens our democracy; it does not strengthen it.

It is also incorrect that, as Jon suggests, reformers and transparency advocates are partly to blame for the gridlock that sometimes afflicts American government at every level. As my co-authors and I demonstrated at some length in our recent Brookings paper, “Why Critics of Transparency Are Wrong,” and in our follow-up op-ed in the Washington Post, reform and transparency efforts are no more responsible for the current dysfunction in our democracy than they were for the gridlock in Fort Lee. Indeed, in both cases, “hacks, machines, big money, and back room deals” are a major cause of the dysfunction. The vicious cycle of special interests, campaign contributions, and secrecy too often freezes our system into stasis, both on a grand scale, when special interests block needed legislation, and on a petty scale, as in Fort Lee. The power of megadonors has, for example, made dysfunction within the House Republican Caucus worse, not better.

Others will undoubtedly address Jon’s new paper at length. But one other point is worth noting now. As in foreign policy discussions, I don’t think Jon’s position merits the mantle of political “realism,” as if those who want democracy to be more democratic and less corrupt are fluffy-headed dreamers. It is the reformers who are the true realists. My co-authors and I in our paper stressed the importance of striking realistic, hard-headed balances, e.g. in discussing our non-absolutist approach to transparency; alas, Jon gives that the back of his hand, acknowledging our approach but discarding the substance to criticize our rhetoric as “radiat[ing] uncompromising moralism.” As Bridgegate shows, the reform movement’s “moralism" correctly recognizes the corrupting nature of power, and accordingly advocates reasonable checks and balances. That is what I call realism. So I will race Jon to the trademark office for who really deserves the title of realist!


Image Source: © Andrew Kelly / Reuters
      





Using Crowd-Sourced Mapping to Improve Representation and Detect Gerrymanders in Ohio

Analysis of dozens of publicly created redistricting plans shows that map-making technology can improve political representation and detect a gerrymander.  In 2012, President Obama won the vote in Ohio by three percentage points, while Republicans held a 13-to-5 majority in Ohio’s delegation to the U.S. House. After redistricting in 2013, Republicans held 12 of Ohio’s…

      
 
 





How Promise programs can help former industrial communities

The nation is seeing accelerating gaps in economic opportunity and prosperity between more educated, tech-savvy, knowledge workers congregating in the nation’s “superstar” cities (and a few university-town hothouses) and residents of older industrial cities and the small towns of “flyover country.” These growing divides are shaping public discourse, as policymakers and thought leaders advance recipes…

       





Implementing Common Core: The problem of instructional time


This is part two of my analysis of instruction and Common Core’s implementation.  I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.”  Having discussed the “good” in part one, I now turn to “the bad.”  One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record.  Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn.     
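Carroll's ratio can be written compactly. The functional form and labels below are my notation for the model described above, not Carroll's original typography:

```latex
\text{degree of learning} \;=\; f\!\left(\frac{\text{time spent on learning}}{\text{time needed to learn}}\right)
```

When the ratio inside f reaches 1, the student has had as much time as he or she needs to master the material; below 1, learning is incomplete.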

The numerator, time spent learning, has also been given the term opportunity to learn.  The denominator, time needed to learn, is synonymous with student aptitude.  By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] 

The source of that variation is largely irrelevant to the constraints placed on instructional decisions.  Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator.  Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like.  There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought.  As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article.  Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll.  It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum.  This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning.  Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today.  Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.     

Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator.  When that happens, students don’t have an adequate opportunity to learn.  They need more time. 

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time.  Time is a limited resource that teachers deploy in the production of learning.  Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary.  Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics.  Excessive instructional time may also negatively affect student engagement.  Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M.  A standard algorithm is a series of steps designed to compute accurately and quickly.  In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers.  Most readers of this post will recognize the standard algorithm for addition.  It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards.  Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade.  This opens the door for a lot of wasted time.  Garelick questioned the wisdom of teaching several alternative strategies for addition.  He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, could it be taught first. As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 

 

Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations.  He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

 

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades.  Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

 

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

 

3.NBT.2  Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

 

The term “strategies and algorithms” is curious.  Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” 

 

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that  despite these being standards, the CCSS-M will allow them great latitude.  Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

 

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. 

 

Standard algorithms were a key point of contention in the “Math Wars” of the 1990s.   The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii]  The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade.  These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists.  In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 

 

Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade.  That’s what reformers get in the compromise.  They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic.  But the standard algorithm is the only one students are required to learn.  And that exclusivity is intended to please the traditionalists.

 

I agree with Garelick that the compromise leads to problems.  In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home.  The students were being taught how to represent addition with drawings that clustered objects into groups of ten.  The exercises were both time consuming and tedious.  When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning.  The parents withdrew their child from the school and enrolled him in private school.

 

The value of standard algorithms is that they are efficient and packed with mathematics.  Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.  Traditionalists and reformers have different goals.  Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems.  Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

 

I have been a critic of the math reform movement since I taught in the 1980s.  But some of their complaints have merit.  All too often, instruction on standard algorithms has left out meaning.  As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps to be memorized.  Michael Battista’s research has provided many instances of students clinging to algorithms without understanding.  He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet.  On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15.  In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25.  Despite the obvious discrepancy (25 is not 15, the student agrees), he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii]

 

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that?  Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions.  Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

 

Conclusion

Let’s return to Carroll’s model of time and learning.  I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences.  Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade.  Thus, the study of whole number arithmetic is completed by the end of fourth grade.  Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics.  Proficiency is sought along three dimensions:  1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years.  Placing the standard for the division algorithm in sixth grade continues the two-year delay.  For many fourth graders, time spent working on addition and subtraction will be wasted time; they already have a firm understanding of those operations.  The same holds for many sixth graders: time devoted to the division algorithm will be wasted time that should instead go to the study of rational numbers.  The numerator in Carroll’s instructional time model will be greater than the denominator, indicating an inefficient allocation of time to instruction.

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures.  Such decisions are made by local educators.  Variation in these decisions will introduce variation in the implementation of the math standards.  It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction.  But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.


[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.  

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001).  “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).


     
 
 





The NAEP proficiency myth


On May 16, I got into a Twitter argument with Campbell Brown of The 74, an education website.  She released a video on Slate giving advice to the next president.  The video begins: “Without question, to me, the issue is education. Two out of three eighth graders in this country cannot read or do math at grade level.”  I study student achievement and was curious.  I know of no valid evidence to make the claim that two out of three eighth graders are below grade level in reading and math.  No evidence was cited in the video.  I asked Brown for the evidentiary basis of the assertion.  She cited the National Assessment of Educational Progress (NAEP).

NAEP does not report the percentage of students performing at grade level.  NAEP reports the percentage of students reaching a “proficient” level of performance.  Here’s the problem. That’s not grade level. 

In this post, I hope to convince readers of two things:

1.  Proficient on NAEP does not mean grade level performance.  It’s significantly above that.
2.  Using NAEP’s proficient level as a basis for education policy is a bad idea.

Before going any further, let’s look at some history.

NAEP history 

NAEP was launched nearly five decades ago.  The first NAEP test was given in science in 1969, followed by a reading test in 1971 and math in 1973.  For the first time, Americans were able to track the academic progress of the nation’s students.  That set of assessments, which periodically tests students 9, 13, and 17 years old and was last given in 2012, is now known as the Long Term Trend (LTT) NAEP. 

It was joined by another set of NAEP tests in the 1990s.  The Main NAEP assesses students by grade level (fourth, eighth, and twelfth) and, unlike the LTT, produces not only national but also state scores.  The two tests, LTT and Main, continue on parallel tracks today, and they are often confounded by casual NAEP observers.  The Main NAEP, which was last administered in 2015, is the test relevant to this post and will be the only one discussed hereafter.

The NAEP governing board was concerned that the conventional metric for reporting results (scale scores) was meaningless to the public, so achievement standards (also known as performance standards) were introduced.  The percentages of students scoring at the advanced, proficient, basic, and below basic levels are reported each time the Main NAEP is given.

Does NAEP proficient mean grade level? 

The National Center for Education Statistics (NCES) states emphatically, “Proficient is not synonymous with grade level performance.” The National Assessment Governing Board has a brochure with information on NAEP, including a section devoted to myths and facts.  There, you will find this:

Myth: The NAEP Proficient level is like being on grade level.

 

Fact: Proficient on NAEP means competency over challenging subject matter.  This is not the same thing as being “on grade level,” which refers to performance on local curriculum and standards. NAEP is a general assessment of knowledge and skills in a particular subject.

Equating NAEP proficiency with grade level is bogus.  Indeed, the validity of the achievement levels themselves is questionable.  They immediately came under fire in reviews by the U.S. Government Accountability Office, the National Academy of Sciences, and the National Academy of Education.[1]  The National Academy of Sciences report was particularly scathing, labeling NAEP’s achievement levels as “fundamentally flawed.”

Despite warnings of NAEP authorities and critical reviews from scholars, some commentators, typically from advocacy groups, continue to confound NAEP proficient with grade level.  Organizations that support school reform, such as Achieve Inc. and Students First, prominently misuse the term on their websites.  Achieve presses states to adopt cut points aligned with NAEP proficient as part of new Common Core-based accountability systems.  Achieve argues that this will inform parents whether children “can do grade level work.” No, it will not.  That claim is misleading.

How unrealistic is NAEP proficient? 

Shortly after NCLB was signed into law, Robert Linn, one of the most prominent psychometricians of the past several decades, called “the target of 100% proficient or above according to the NAEP standards more like wishful thinking than a realistic possibility.”  History is on the side of that argument.  When the first main NAEP in mathematics was given in 1990, only 13% of eighth graders scored proficient and 2% scored advanced.  Imagine using “proficient” as synonymous with grade level—85% scored below grade level!

The 1990 national average in eighth grade scale scores was 263 (see Table 1).  In 2015, the average was 282, a gain of 19 scale score points.

Table 1.  Main NAEP Eighth Grade Math Score, by achievement levels, 1990-2015

Year   Scale Score Average   Below Basic (%)   Basic (%)   Proficient (%)   Advanced (%)   Proficient and Above (%)
2015          282                  29              38            25               8                  33
2009          283                  27              39            26               8                  34
2003          278                  32              39            23               5                  28
1996          270                  39              38            20               4                  24
1990          263                  48              37            13               2                  15

That’s an impressive gain.  Analysts who study NAEP often use 10 points on the NAEP scale as a back-of-the-envelope estimate of one year’s worth of learning.  By that yardstick, eighth graders have gained almost two years.  The percentage of students scoring below basic has dropped from 48% in 1990 to 29% in 2015.  The percentage scoring proficient or above has more than doubled, from 15% to 33%.  That’s not bad news; it’s good news.

But the cut point for NAEP proficient is 299.  By that standard, two-thirds of eighth graders are still falling short.  Even students in private schools, despite hailing from more socioeconomically advantaged homes and in some cases being selectively admitted by schools, fail miserably at attaining NAEP proficiency.  More than half (53 percent) are below proficient. 

Today’s eighth graders have made it about halfway to NAEP proficient in 25 years, but they still need to gain almost two more years of math learning (17 points) to reach that level.  And, don’t forget, that’s just the national average, so even when that lofty goal is achieved, half of the nation’s students will still fall short of proficient.  Advocates of the NAEP proficient standard want it to be for all students.  That is ridiculous.  Another way to think about it: proficient for today’s eighth graders reflects approximately what the average twelfth grader knew in mathematics in 1990.  Someday the average eighth grader may be able to do that level of mathematics.  But it won’t be soon, and it won’t be every student.
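The arithmetic behind these claims can be checked in a few lines of Python; the 10-points-per-year conversion is only the rough heuristic analysts use, not an official NAEP equivalence:

```python
# Back-of-the-envelope heuristic discussed in the post:
# roughly 10 NAEP scale points per year of learning.
POINTS_PER_YEAR = 10  # rough estimate, not an official NAEP conversion

def years_of_learning(points: float) -> float:
    """Convert a NAEP scale-score difference into approximate years of learning."""
    return points / POINTS_PER_YEAR

gain_since_1990 = 282 - 263    # eighth grade math average, 1990 vs. 2015 (19 points)
gap_to_proficient = 299 - 282  # 2015 average vs. the proficient cut point (17 points)

print(years_of_learning(gain_since_1990))    # ~1.9 "years" gained over 25 years
print(years_of_learning(gap_to_proficient))  # ~1.7 "years" still to go
```

The sketch simply restates the post's numbers: a 19-point gain is "almost two years," and the remaining 17-point gap is almost two more.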

In the 2007 Brown Center Report on American Education, I questioned whether NAEP proficient is a reasonable achievement standard.[2]  That year, a study by Gary Phillips of American Institutes for Research was published that projected the 2007 TIMSS scores on the NAEP scale.  Phillips posed the question: based on TIMSS, how many students in other countries would score proficient or better on NAEP?  The study’s methodology only produces approximations, but they are eye-popping.

Here are just a few countries:

Table 2.  Projected Percent NAEP Proficient, Eighth Grade Math

Singapore            73
Hong Kong SAR        66
Korea, Rep. of       65
Chinese Taipei       61
Japan                57
Belgium (Flemish)    40
United States        26
Israel               24
England              22
Italy                17
Norway                9

Singapore was the top scoring nation on TIMSS that year, but even there, more than a quarter of students fail to reach NAEP proficient.  Japan is not usually considered a slouch on international math assessments, but 43% of its eighth graders fall short.  The U.S. looks weak, with only 26% of students proficient.  But England, Israel, and Italy are even weaker.  Norway, a wealthy nation with per capita GDP almost twice that of the U.S., can only get 9 out of 100 eighth graders to NAEP proficient.

Finland isn’t shown in the table because it didn’t participate in the 2007 TIMSS.  But it did in 2011, with Finland and the U.S. scoring about the same in eighth grade math.  Had Finland’s eighth graders taken NAEP in 2011, it’s a good bet that the proportion scoring below NAEP proficient would have been similar to that in the U.S.  And yet articles such as “Why Finland Has the Best Schools” appear regularly in the U.S. press.[3]

Why it matters

The National Center for Education Statistics warns that federal law requires that NAEP achievement levels be used on a trial basis until the Commissioner of Education Statistics determines that the achievement levels are “reasonable, valid, and informative to the public.”  As the NCES website states, “So far, no Commissioner has made such a determination, and the achievement levels remain in a trial status.  The achievement levels should continue to be interpreted and used with caution.”

Confounding NAEP proficient with grade level is uninformed.  Designating NAEP proficient as the achievement benchmark for accountability systems is certainly not cautious use.  If students are required to meet NAEP proficient to graduate from high school, large numbers will fail. If middle and elementary school students are forced to repeat grades because they fall short of a standard anchored to NAEP proficient, vast numbers will repeat grades.

On NAEP, students are asked the highest-level math course they have taken.  On the 2015 twelfth grade NAEP, 19% of students said they either were taking or had taken calculus.   These are the nation's best and brightest, the crème de la crème of math students.  Only one in five students works that far up the hierarchy of American math courses.  If you are over 45 and reading this, the proportion of your cohort who took calculus in high school is less than one in ten.  In the graduating class of 1990, for instance, only 7% of students had taken calculus.[4]

Unsurprisingly, calculus students are also typically taught by the nation’s most knowledgeable math teachers.  The nation’s elite math students paired with the nation’s elite math teachers: if any group can prove NAEP proficient a reasonable goal and succeed in getting all students over the NAEP proficiency bar, this is the group. 

But they don’t.  A whopping 30% score below proficient on NAEP.  For black and Hispanic calculus students, the figures are staggering.  Two-thirds of black calculus students score below NAEP proficient.  For Hispanics, the figure is 52%.  The nation’s pre-calculus students also fair poorly (69% below proficient). Then the success rate falls off a cliff.  In the class of 2015, more than nine out of ten students whose highest math course was Trigonometry or Algebra II fail to meet the NAEP proficient standard.

Table 3.  2015 NAEP Twelfth Grade Math, Percentage Below Proficient by Highest Math Course Taken

Calculus             30
Pre-calculus         69
Trig/Algebra II      92

Source: NAEP Data Explorer

These data defy reason; they also refute common sense.  For years, educators have urged students to take the toughest courses they can possibly take.  Taken at face value, the data in Table 3 rip the heart out of that advice.  These are the toughest courses, and yet huge numbers of the nation’s star students, by any standard aligned with NAEP proficient, would be told that they have failed.  Some parents, misled by the confounding of proficient with grade level, might even mistakenly believe that their kids don’t know grade level math.

Conclusion 

NAEP proficient is not synonymous with grade level.  NAEP officials urge that proficient not be interpreted as reflecting grade level work.  It is a standard set much higher than that.  Scholarly panels have reviewed the NAEP achievement standards and found them flawed.  The highest scoring nations of the world would appear to be mediocre or poor performers if judged by the NAEP proficient standard.  Even large numbers of U.S. calculus students fall short.

As states consider building benchmarks for student performance into accountability systems, they should not use NAEP proficient—or any standard aligned with NAEP proficient—as a benchmark.  It is an unreasonable expectation, one that ill serves America’s students, parents, and teachers--and the effort to improve America’s schools.


[1] Shepard, L. A., Glaser, R., Linn, R., & Bohrnstedt, G. (1993) Setting Performance Standards For Student Achievement: Background Studies. Report of the NAE Panel on the Evaluation of the NAEP Trial State Assessment: An Evaluation of the 1992 Achievement Levels. National Academy of Education. 

[2] Loveless, Tom.  The 2007 Brown Center Report, pages 10-13.

[3] William Doyle, “Why Finland Has The Best Schools,” Los Angeles Times, March 18, 2016.

[4] NCES, America’s High School Graduates: Results of the 2009 NAEP High School Transcript Study.  See Table 8, p. 49.


Obama’s exit calculus on the peace process

One issue that has traditionally shared bipartisan support is how the United States should approach the Israeli-Palestinian conflict, write Sarah Yerkes and Ariella Platcha. However, this year both parties have shifted their positions farther from the center and from past Democratic and Republican platforms. How will that affect Obama’s strategy?

      
 
 





An agenda for reducing poverty and improving opportunity


SUMMARY:
With the U.S. poverty rate stuck at around 15 percent for years, it's clear that something needs to change, and candidates need to focus on three pillars of economic advancement--education, work, and family--to increase economic mobility, according to Brookings Senior Fellow Isabel Sawhill and Senior Research Assistant Edward Rodrigue.

“Economic success requires people’s initiative, but it also requires us, as a society, to untangle the web of disadvantages that make following the sequence difficult for some Americans. There are no silver bullets. Government cannot do this alone. But government has a role to play in motivating individuals and facilitating their climb up the economic ladder,” they write.

The pillar of work is the most urgent, they assert, with every candidate needing to have concrete jobs proposals. Closing the jobs gap (the difference in work rates between lower and higher income households) has a huge effect on the number of people in poverty, even if the new workers hold low-wage jobs. Work connects people to mainstream institutions, helps them learn new skills, provides structure to their lives, and provides a sense of self-sufficiency and self-respect, while at the aggregate level, it is one of the most important engines of economic growth. Specifically, the authors advocate for making work pay (EITC), a second-earner deduction, childcare assistance and paid leave, and transitional job programs. On the education front, they suggest investment in children at all stages of life: home visiting, early childhood education, new efforts in the primary grades, new kinds of high schools, and fresh policies aimed at helping students from poor families attend and graduate from post-secondary institutions. And for the third prong, stable families, Sawhill and Rodrigue suggest changing social norms around the importance of responsible, two-person parenthood, as well as making the most effective forms of birth control (IUDs and implants) more widely available at no cost to women.

“Many of our proposals would not only improve the life prospects of less advantaged children; they would pay for themselves in higher taxes and less social spending. The candidates may have their own blend of responses, but we need to hear less rhetoric and more substantive proposals from all of them,” they conclude.


Campaign 2016: Ideas for reducing poverty and improving economic mobility


We can be sure that the 2016 presidential candidates, whoever they are, will be in favor of promoting opportunity and cutting poverty. The question is: how? In our contribution to a new volume published today, “Campaign 2016: Eight big issues the presidential candidates should address,” we show that people who clear three hurdles—graduating high school, working full-time, and delaying parenthood until they are in a stable, two-parent family—are far more likely to climb into the middle class than to fall into poverty.

But what specific policies would help people achieve these three benchmarks of success?  Our paper contains a number of ideas that candidates might want to adopt. Here are a few examples: 

1. To improve high school graduation rates, expand “Small Schools of Choice,” a program in New York City, which replaced large, existing schools with more numerous, smaller schools that had a theme or focus (like STEM or the arts). The program increased graduation rates by about 10 percentage points and also led to higher college enrollment with no increase in costs.

2. To support work, make the Child and Dependent Care Tax Credit (CDCTC) refundable and cap it at $100,000 in household income. Because the credit is currently non-refundable, low-income families receive little or no benefit, while those with incomes above $100,000 receive generous tax deductions. This proposal would make the program more equitable and facilitate low-income parents’ labor force participation, at no additional cost.

3. To strengthen families, make the most effective forms of birth control (IUDs and implants) more widely available at no cost to women, along with good counselling and a choice of all FDA-approved methods. Programs that have done this in selected cities and states have reduced unplanned pregnancies, saved money, and given women better ability to delay parenthood until they and their partners are ready to be parents. Delayed childbearing reduces poverty rates and leads to better prospects for the children in these families.

These are just a few examples of good ideas, based on the evidence, of what a candidate might want to propose and implement if elected. Additional ideas and analysis will be found in our longer paper on this topic.


The District’s proposed law shows the wrong way to provide paid leave


The issue of paid leave is heating up in 2016. At least two presidential candidates — Democrat Hillary Clinton and Republican Sen. Marco Rubio (Fla.) — have proposed new federal policies. Several states and large cities have begun providing paid leave to workers when they are ill or have to care for a newborn child or other family member.

This forward movement on paid-leave policy makes sense. The United States is the only advanced country without a paid-leave policy. While some private and public employers already provide paid leave to their workers, the workers least likely to get paid leave are low-wage and low-income workers who need it most. They also cannot afford to take unpaid leave, which the federal government mandates for larger companies.

Paid leave is good for the health and development of children; it supports work, enabling employees to remain attached to the labor force when they must take leave; and it can lower costly worker turnover for employers. Given the economic and social benefits it provides and given that the private market will not generate as much as needed, public policies should ensure that such leave is available to all.

But it is important to do so efficiently, so as not to burden employers with high costs that could lead them to substantially lower wages or create fewer jobs.

States and cities that require employers to provide paid sick days mandate just a small number, usually three to seven days. Family or temporary disability leaves that must be longer are usually financed through small increases in payroll taxes paid by workers and employers, rather than by employer mandates or general revenue.

Policy choices could limit costs while expanding benefits. For instance, states should limit eligibility to workers with a minimum amount of work experience, such as a year, and it might make sense to increase the benefit with years of accrued service to encourage labor force attachment. Some states provide four to six weeks of family leave, though somewhat larger amounts of time may be warranted, especially for the care of newborns, where three months seems reasonable.

Paid leave need not mean full replacement of existing wages. Replacing two-thirds of weekly earnings up to a set limit is reasonable. The caps and partial wage replacement give workers some incentive to limit their use of paid leave without imposing large financial burdens on those who need it most.

While many states and localities have made sensible choices in these areas, some have not. For instance, the D.C. Council has proposed paid-leave legislation for all but federal workers that violates virtually all of these rules. It would require up to 16 weeks of temporary disability leave and up to 16 weeks of paid family leave; almost all workers would be eligible for coverage, without major experience requirements; and the proposed law would require 100 percent replacement of wages up to $1,000 per week, and 50 percent coverage up to $3,000. It would be financed through a progressive payroll tax on employers only, which would increase to 1 percent for higher-paid employees.
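Read literally, the proposed D.C. replacement schedule implies a simple piecewise benefit formula. The sketch below is an illustrative reading of the proposal as described above (100 percent replacement of the first $1,000 of weekly wages, 50 percent of wages between $1,000 and $3,000), not an official calculator:

```python
def weekly_benefit(weekly_wage: float) -> float:
    """Illustrative paid-leave benefit under the proposed D.C. schedule.

    100% replacement of the first $1,000 of weekly wages, plus 50% of
    wages between $1,000 and $3,000 (benefit therefore caps at $2,000).
    """
    full = min(weekly_wage, 1_000)                              # fully replaced tier
    partial = 0.5 * max(0.0, min(weekly_wage, 3_000) - 1_000)   # half-replaced tier
    return full + partial

# A low-wage worker is fully covered; higher earners hit the caps.
for wage in (600, 1_500, 4_000):
    print(f"wage ${wage}/week -> benefit ${weekly_benefit(wage):,.0f}")
```

The structure makes the cost driver visible: unlike the two-thirds-with-a-cap design discussed earlier, full replacement of the first tier removes any incentive for lower-wage workers to limit their use of leave.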

Our analysis suggests that this level of leave would be badly underfunded by the proposed tax, perhaps by as much as two-thirds. Economists believe that payroll taxes on employers are mostly paid through lower worker wages, so the higher taxes needed to fully fund such generous leave would burden workers. The costly policy might cause employers to discriminate against women.

The disruptions and burdens of such lengthy leaves could cause employers to hire fewer workers or shift operations elsewhere over time. This is particularly true here, considering that the D.C. Council already has imposed costly burdens on employers, such as high minimum wages (rising to $11.50 per hour this year), paid sick leave (although smaller amounts than now proposed) and restrictions on screening candidates. The minimum wage in Arlington is $7.25 with no other mandates. Employers will be tempted to move operations across the river or to replace workers with technology wherever possible.

Cities, states and the federal government should provide paid sick and family leave for all workers. But it can and should be done in a fiscally responsible manner that does not place undue burdens on the workers themselves or on their employers.


Editor's note: this piece originally appeared in The Washington Post


Social mobility: A promise that could still be kept


As a rhetorical ideal, greater opportunity is hard to beat. Just about all candidates for high elected office declare their commitments to promoting opportunity – who, after all, could be against it? But opportunity is, to borrow a term from the philosopher and political theorist Isaiah Berlin, a "protean" word, with different meanings for different people at different times.

Typically, opportunity is closely entwined with an idea of upward mobility, especially between generations. The American Dream is couched in terms of a daughter or son of bartenders or farm workers becoming a lawyer, or perhaps even a U.S. senator. But even here, there are competing definitions of upward mobility.

It might mean being better off than your parents were at a similar age. This is what researchers call "absolute mobility," and largely relies on economic growth – the proverbial rising tide that raises most boats.

Or it could mean moving to a higher rung of the ladder within society, and so ending up in a better relative position than one's parents.

Scholars label this movement "relative mobility." And while there are many ways to think about status or standard of living – education, wealth, health, occupation – the most common yardstick is household income at or near middle age (which, somewhat depressingly, tends to be defined as 40).

As a basic principle, we ought to care about both kinds of mobility as proxies for opportunity. We want children to have the chance to do absolutely and relatively well in comparison to their parents.

On the One Hand…

So how are we doing? The good news is that economic standards of living have improved over time. Most children are therefore better off than their parents. Among children born in the 1970s and 1980s, 84 percent had higher incomes (even after adjusting for inflation) than their parents did at a similar age, according to a Pew study. Absolute upward income mobility, then, has been strong, and has helped children from every income class, especially those nearer the bottom of the ladder. More than 9 in 10 of those born into families in the bottom fifth of the income distribution have been upwardly mobile in this absolute sense.

There's a catch, though. Strong absolute mobility goes hand in hand with strong economic growth. So it is quite likely that these rates of generational progress will slow, since the potential growth rate of the economy has probably diminished. This risk is heightened by an increasingly unequal division of the proceeds of growth in recent years. Today's parents are certainly worried. Surveys show that they are far less certain than earlier cohorts that their children will be better off than they are.

If the story on absolute mobility is about to turn for the worse, the picture for relative mobility is already pretty bad. The basic message here: pick your parents carefully. If you are born to parents in the poorest fifth of the income distribution, your chance of remaining stuck in that income group is around 35 to 40 percent. If you manage to be born into a higher-income family, the chances are similarly good that you will remain there in adulthood.

It would be wrong, however, to say that class positions are fixed. There is still a fair amount of fluidity or social mobility in America – just not as much as most people seem to believe or want. Relative mobility is especially sticky in the tails at the high and low end of the distribution. Mobility is also considerably lower for blacks than for whites, with blacks much less likely to escape from the bottom rungs of the ladder. Equally ominously, they are much more likely to fall down from the middle quintile.

Relative mobility rates in the United States are lower than the rhetoric about equal opportunity might suggest and lower than people believe. But are they getting worse? Current evidence suggests not. In fact, the trend line for relative mobility has been quite flat for the past few decades, according to work by Raj Chetty of Stanford and his co-researchers. It is simply not the case that the amount of intergenerational relative mobility has declined over time.

Whether this will remain the case as the generations of children exposed to growing income inequality mature is not yet clear, though. As one of us (Sawhill) has noted, when the rungs on the ladder of opportunity grow further apart, it becomes more difficult to climb the ladder. To the same point, in his latest book, Our Kids: The American Dream in Crisis, Robert Putnam of Harvard argues that the growing gaps not just in income but also in neighborhood conditions, family structure, parenting styles and educational opportunities will almost inevitably lead to less social mobility in the future. Indeed, these multiple disadvantages or advantages are increasingly clustered, making it harder for children growing up in disadvantaged circumstances to achieve the dream of becoming middle class.

The Geography of Opportunity

Another way to assess the amount of mobility in the United States is to compare it to that found in other high-income nations. Mobility rates are highest in Scandinavia and lowest in the United States, Britain and Italy, with Australia, Western Europe and Canada lying somewhere in between, according to analyses by Jo Blanden, of the University of Surrey and Miles Corak of the University of Ottawa. Interestingly, the most recent research suggests that the United States stands out most for its lack of downward mobility from the top. Or, to paraphrase Billie Holiday, God blesses the child that's got his own.

Any differences among countries, while notable, are more than matched by differences within the United States. Pioneering work (again by Raj Chetty and his colleagues) shows that some cities have much higher rates of upward mobility than others. From a mobility perspective, it is better to grow up in San Francisco, Seattle or Boston than in Atlanta, Baltimore or Detroit. Families that move to these high-mobility communities when their children are still relatively young enhance the chances that the children will have more education and higher incomes in early adulthood. Greater mobility can be found in places with better schools, fewer single parents, greater social capital, lower income inequality and less residential segregation. However, the extent to which these factors are causes rather than simply correlates of higher or lower mobility is not yet known. Scholarly efforts to establish why it is that some children move up the ladder and others don't are still in their infancy.

Models of Mobility

What is it about their families, their communities and their own characteristics that determine why they do or do not achieve some measure of success later in life?

To help get at this vital question, the Brookings Institution has created a life-cycle model of children's trajectories, using data from the National Longitudinal Survey of Youth on about 5,000 children from birth to age 40. (The resulting Social Genome Model is now a partnership among three institutions: Brookings, the Urban Institute and Child Trends.) Our model tracks children's progress through multiple life stages with a corresponding set of success measures at the end of each. For example, children are considered successful at the end of elementary school if they have mastered basic reading and math skills and have acquired the behavioral or non-cognitive competencies that have been shown to predict later success. At the end of adolescence, success is measured by whether the young person has completed high school with a GPA of 2.5 or better and has not been convicted of a crime or had a baby as a teenager.

These metrics capture common-sense intuition about what drives success. But they are also aligned with the empirical evidence on life trajectories. Educational achievement, for example, has a strong effect on later earnings and income, and this well-known linkage is reflected in the model. We have worked hard to adjust for confounding variables but cannot be sure that all such effects are truly causal. We do know that the model does a good job of predicting or projecting later outcomes.

Three findings from the model stand out. First, it's clear that success is a cumulative process. According to our measures, a child who is ready for school at age 5 is almost twice as likely to be successful at the end of elementary school as one who is not.

This doesn't mean that a life course is set in stone this early, however.

Children who get off track at an early age frequently get back on track at a later age; it's just that their chances are not nearly as good. So this is a powerful argument for intervening early in life. But it is not an argument for giving up on older youth.
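The cumulative logic of this first finding can be illustrated with a toy stage-by-stage simulation. The transition probabilities below are hypothetical stand-ins chosen for illustration, not the Social Genome Model's actual estimates:

```python
import random

# Hypothetical, illustrative probabilities -- NOT the Social Genome
# Model's estimates. Children on track succeed at the next stage more
# often, but off-track children can still recover at a lower rate.
P_SUCCESS_IF_ON_TRACK = 0.8
P_SUCCESS_IF_OFF_TRACK = 0.45
STAGES = ["school readiness", "elementary", "adolescence", "adulthood"]

def simulate_child(start_on_track: bool, rng: random.Random) -> bool:
    """Walk one child through the stages; True if successful at the end."""
    on_track = start_on_track
    for _ in STAGES[1:]:
        p = P_SUCCESS_IF_ON_TRACK if on_track else P_SUCCESS_IF_OFF_TRACK
        on_track = rng.random() < p
    return on_track

def success_rate(start_on_track: bool, trials: int = 100_000) -> float:
    rng = random.Random(0)  # fixed seed for a reproducible estimate
    wins = sum(simulate_child(start_on_track, rng) for _ in range(trials))
    return wins / trials

# Starting on track raises final success rates, but because off-track
# children can recover, the gap narrows (not closes) over the stages.
print(f"ready at age 5:     {success_rate(True):.2f}")
print(f"not ready at age 5: {success_rate(False):.2f}")
```

With these made-up numbers, early success nearly doubles the odds at the next stage (0.8 vs. 0.45), while the remaining gap by "adulthood" is smaller, mirroring the point that early intervention matters but older youth are not lost causes.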

Second, the chances of clearing our last hurdle – being middle class by middle age (specifically, having an income of around $68,000 for a family of four by age 40) – vary quite significantly. A little over half of all children born in the 1980s and 1990s achieved this goal. But those who are black or born into low-income families were far less likely than others to achieve this benchmark.

Third, the effect of a child's circumstances at birth is strong. We use a multidimensional measure here, including not just the family's income but also the mother's education, the marital status of the parents and the birth weight of the child. Together, these factors have substantial effects on a child's subsequent success. Maternal education seems especially important.

The Social Genome Model, then, is a useful tool for looking under the hood at why some children succeed and others don't. But it can also be used to assess the likely impact of a variety of interventions designed to improve upward mobility. For one illustrative simulation, we hand-picked a battery of programs shown to be effective at different life stages – a parenting program, a high-quality early-education program, a reading and socio-emotional learning program in elementary school, a comprehensive high school reform model – and assessed the possible impact for low-income children benefiting from each of them, or all of them.

No single program does very much to close the gap between children from lower- and higher-income families. But the combined effects of multiple programs – that is, from intervening early and often in a child's life – has a surprisingly big impact. The gap of almost 20 percentage points in the chances of low-income and high-income children reaching the middle class shrinks to six percentage points. In other words, we are able to close about two-thirds of the initial gap in the life chances of these two groups of children. The black-white gap narrows, too.

Looking at the cumulative impact on adult incomes over a working life (all appropriately discounted over time) and comparing these lifetime income benefits to the costs of the programs, we believe that such investments would pass a cost-benefit test from the perspective of society as a whole and even from the narrower perspective of the taxpayers who fund the programs.

What Now?

Understanding the processes that lie beneath the patterns of social mobility is critical. It is not enough to know how good the odds of escaping are for a child born into poverty. We want to know why. We can never eliminate the effects of family background on an individual's life chances. But the wide variation among countries and among cities in the U.S. suggests that we could do better – and that public policy may have an important role to play. Models like the Social Genome are intended to assist in that endeavor, in part by allowing policymakers to bench-test competing initiatives based on the statistical evidence.

America's presumed exceptionalism is rooted in part in the belief that class-based distinctions are less important than in Western Europe. From this perspective, it is distressing to learn that American children do not have exceptional opportunities to get ahead – and that the consequences of gaps in children's initial circumstances might embed themselves in the social fabric over time, leading to even less social mobility in the future.

But there is also some cause for optimism. Programs that compensate at least to some degree for disadvantages earlier in life really can close opportunity gaps and increase rates of social mobility. Moreover, by most any reasonable reckoning, the return on the public investment is high.


Editor's note: This piece originally appeared in the Milken Institute Review.


Experts assess the nuclear Non-Proliferation Treaty, 50 years after it went into effect

March 5, 2020 marks the 50th anniversary of the entry into effect of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Five decades on, is the treaty achieving what was originally envisioned? Where is it succeeding in curbing the spread of nuclear weapons, and where might it be falling short? Four Brookings experts on defense…

       





Walk this Way: The Economic Promise of Walkable Places in Metropolitan Washington, D.C.

An economic analysis of a sample of neighborhoods in the Washington, D.C. metropolitan area using walkability measures finds that: More walkable places perform better economically. For neighborhoods within metropolitan Washington, as the number of environmental features that facilitate walkability and attract pedestrians increase, so do office, residential, and retail rents, retail revenues, and for-sale…

       





Taxing capital income: Mark-to-market and other approaches

Given increased income and wealth inequality, much recent attention has been devoted to proposals to increase taxes on the wealthy (such as imposing a tax on accumulated wealth). Since capital income is highly skewed toward the ultra-wealthy, methods of increasing taxes on capital income provide alternative approaches for addressing inequality through the tax system. Marking…

       





Mexico is a prop in President Trump’s political narrative

When it comes to his country’s relationship with Mexico, U.S. President Donald Trump has decided to take a position that is at once reckless and suicidal. Reckless, because he is single-handedly scuttling a bilateral relationship with a nation that is vital to the prosperity, security, and well-being of the U.S. Suicidal, because the punitive tariffs…

       





The U.S. and China’s Great Leap Forward … For Climate Protection

It’s rare in international diplomacy today that dramatic agreements come entirely by surprise.  And that’s particularly the case in economic negotiations, where corporate, labor, and environmental organizations intensely monitor the actions of governments – creating a rugby scrum around the ball of the negotiation that seems to grind everything to incremental measures. That’s what makes…

       





The Summit of the Americas and prospects for inter-American relations


Event Information

April 3, 2015
9:00 AM - 10:15 AM EDT

Saul/Zilkha Rooms
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036


On April 10 and 11, 2015, the Seventh Summit of the Americas will bring together the heads of state and government of every country in the Western Hemisphere for the first time. Recent efforts by the United States to reform immigration policy, re-establish diplomatic relations with Cuba, and reform our approach to drug policies at home and abroad have generated greater optimism about the future of inter-American relations. This Summit provides an opportunity to spark greater collaboration on development, social inclusion, democracy, education, and energy security.

However, this Summit of the Americas is also convening at a time when the hemisphere is characterized by competing visions for economic development, democracy and human rights, and regional cooperation through various institutions such as the Organization of American States, the Union of South American Nations, and the Community of Latin American and Caribbean States.

On Friday, April 3, the Latin America Initiative at Brookings hosted Assistant Secretary of State Roberta S. Jacobson for a discussion on the Seventh Summit of the Americas and what it portends for the future of hemispheric relations.


Reconciling U.S. property claims in Cuba


As the United States and Cuba rebuild formal relations, certain challenging topics remain to be addressed. Among these are outstanding U.S. property claims in Cuba. In this report, Richard E. Feinberg argues that it is in both countries’ interests to tackle this thorny issue expeditiously, and that the trauma of property seizures in the twentieth century could be transformed into an economic opportunity now.

The report looks closely at the nearly 6,000 certified U.S. claims, disaggregating them by corporate and individual, large and small. To settle the U.S. claims, Feinberg suggests a hybrid formula, whereby smaller claimants receive financial compensation while larger corporate claimants can select an “opt-out” option whereby they pursue their claims directly with Cuban authorities, perhaps facilitated by an umbrella bilateral claims resolution committee. In this scenario, the larger corporate claimants (which account for nearly $1.7 billion of the $1.9 billion in total U.S. claims, excluding interest) could select from a menu of business development rights, including vouchers applicable to tax liabilities or equity investments, and preferred acquisition rights. Participating U.S. firms could also agree to inject additional capital and modern technology, to ensure benefits to the Cuban economy.

Though it is often argued that Cuba is too poor to pay some $2 billion of claims, the paper finds that Cuba can in fact manage payments if they are stretched out over a reasonable period of time and exclude interest. The paper also suggests a number of mechanisms whereby the Cuban government could secure funds to pay compensation, including revenues on normalization-related activities.

The Cuban government does not dispute the principle of compensation for properties nationalized in the public interest; the two governments agree on this. Cuba also asserts a set of counter-claims that allege damages from the embargo and other punitive actions against it. But a grand bargain with claims settlement as the centerpiece would require important changes in U.S. sanctions laws and regulations that restrict U.S. investments in Cuba. The United States could also offer to work with Cuba and other creditors to renegotiate Cuba’s outstanding official and commercial debts, taking into account Cuba’s capacity to pay, and allow Cuba to enter the international financial institutions.

Feinberg ultimately argues that both nations should make claims resolution the centerpiece of a grand bargain that would advance the resolution of a number of other remaining points of tension between the two nations. This would pave the way for Cuba to embrace an ambitious, forward-looking development strategy and for real, notable progress in normalizing relations with the United States.


Burkina Faso Protests Extending Presidential Term Limits


On Tuesday, October 28, 2014, tens of thousands of citizens of Burkina Faso gathered in the capital city, Ouagadougou, and the second-biggest city, Bobo Dioulasso, to protest proposed changes to the constitution regarding term limits. A vote was planned for Thursday on whether to extend the current limit of two terms to three. This vote is extremely controversial: Current President Blaise Compaoré, who came to power in a coup in 1987, has ruled the country for 27 years. Allowing him to run for election in November 2015 could extend his reign for another five years. In Ouagadougou on Wednesday, citizens angry over the possibility that parliament might allow Compaoré to stay in power indefinitely set fire to the parliament building and forced legislators to postpone the vote on the constitutional issue that had been set for Thursday, October 30, 2014.

A History of Autocracy in Burkina Faso

The West African country has been plagued by dictators, autocracies and coups in the past. At independence on August 5, 1960, Maurice Yaméogo, leader of the Voltaic Democratic Union (Union démocratique voltaïque), became the country’s first president. Shortly after assuming power, Yaméogo banned all political opposition, provoking mass riots and demonstrations that came to an end only after the military intervened in 1966. Lt. Col. Sangoulé Lamizana and a collection of military elites took control of the government, subsequently dissolving the National Assembly and suspending the constitution. Lamizana stayed in power until November 1980, when the military overthrew the government and installed Col. Saye Zerbo as the new president. Two years later, Col. Zerbo’s government was overthrown by Maj. Dr. Jean-Baptiste Ouédraogo and the Council of Popular Salvation (CSP—Conseil du salut du peuple). Although it promised to transition the country to civilian rule and provide a new constitution, the Ouédraogo regime banned all political organizations, including opposition parties. A political struggle soon arose within the CSP. The radicals, led by Captain Thomas Sankara, overthrew the government in August 1983, and Capt. Sankara emerged as the country’s new leader. In 1984, the Sankara government changed the country’s name from Upper Volta to Burkina Faso and introduced many institutional reforms that effectively aligned the country with Marxist ideals.

On October 15, 1987, Capt. Blaise Compaoré, a former colleague of Sankara’s, killed Sankara and several of his confidants in a successful coup d’état. In 1991, Compaoré was elected president in an election in which only 25 percent of the electorate participated because of a boycott organized and carried out by opposition parties. In 1998, he won reelection for another seven-year term. As president, Compaoré reversed all the progressive policies that Sankara had implemented.

President Blaise Compaoré’s Time in Power

In 2000, the country’s post-Cold War 1991 constitution was amended to impose a limit of two consecutive five-year terms on the presidency. However, Compaoré’s supporters argued that because he was in office when the amendments went into effect, they did not apply to him and, hence, he was qualified to run for re-election in 2005. Despite the fact that the opposition fielded several candidates, Compaoré won 80.35 percent of the votes cast in the 2005 presidential election. And, in the presidential election held in November 2010, he captured 80.2 percent of the votes.

Over more than a quarter century in power, Compaoré has used an unusual formula to achieve relative stability in Burkina Faso—authoritarianism mixed with traces of democracy. This complex governance system has relied primarily on Compaoré’s dominant and charismatic political power and has failed to build sustainable institutions—specifically, ones capable of maintaining the rule of law and enhancing peaceful coexistence in his absence.

Constitutionally mandated presidential term limits strengthen the rule of law and provide a significant level of stability and predictability to the country’s governance institutions. In response to the efforts by Burkinabé members of parliament to change the constitution to enable Compaoré to secure another term in office, U.S. government officials have recently stated that “democratic institutions are strengthened when established rules are adhered to with consistency.” For his part, Compaoré has proclaimed that his main and immediate concern “is not to build a future for myself—but to see how the future of this country will take shape.” If this is indeed true, then he should exit gracefully from the Burkinabé political scene and henceforth serve as an elder statesman, providing the country’s new leadership with the advice and support it needs to deepen and institutionalize democracy, as well as enhance economic, social, political and human development.

Insisting, as President Compaoré has done, that the constitution be changed so that he can seek an additional term in power not only undermines the country’s fragile stability but also sends the wrong message to citizens about the rule of law: While citizens must be law-abiding, the president does not have to abide by the country’s settled law; if the law stands in the way of his personal ambitions, he can simply change the law to achieve those objectives. Such behavior from the country’s chief executive does not augur well for deepening the country’s democracy, an objective that is dear to many Burkinabé. The question to ask President Compaoré is: How do you want history to remember you? As a self-serving political opportunist who used his public position to accumulate personal power and wealth at the expense of fellow citizens, or as a public servant who led and directed his country’s transformation into a peaceful, safe and productive society?


African Union Commission elections and prospects for the future


The African Union (AU) will hold its 27th Heads of State Assembly in Kigali from July 17-18, 2016, as part of its ongoing annual meetings, during which time it will elect individuals to lead the AU Commission for the next four years. Given the fierce battle for the chairperson position in 2012, and given that the AU has increasingly been called upon to assume more responsibility for various issues that affect the continent—from the Ebola epidemic that ravaged West Africa in 2013-14 to civil wars in several countries, including Libya, the Central African Republic, and South Sudan—both the AU Commission and its leadership have become very important and extremely prestigious actors. The upcoming elections are not symbolic: They are about choosing trusted and competent leaders to guide the continent in good times and bad.

Structure of the African Union

The African Union (AU) [1] came into being on July 9, 2002, replacing the Organization of African Unity (OAU). The AU’s highest decisionmaking body is the Assembly of the African Union, which consists of the heads of state and government of all AU member states. The chairperson of the assembly is the ceremonial head of the AU and is elected by the Assembly of Heads of State to serve a one-year term. The assembly is currently chaired by President Idriss Déby of Chad.

The AU’s secretariat is called the African Union Commission [2] and is based in Addis Ababa. The chairperson of the AU Commission is the chief executive officer, the AU’s legal representative, and the accounting officer of the commission, and is directly responsible to the AU’s Executive Council. The current chairperson of the AU Commission is Dr. Nkosazana Dlamini Zuma of South Africa; she is assisted by a deputy chairperson, currently Erastus Mwencha of Kenya.

The likely nominees for chairperson

Dr. Zuma has decided not to seek a second term in office and, hence, this position is open for contest. The position of deputy chairperson will also become vacant, since Mwencha is not eligible to serve in the new commission.

Notably, the position of chairperson of the AU Commission brings prestige and continental recognition not only to the person elected to serve but also to the country and region from which that person hails. Already, the Southern African Development Community (SADC), Dr. Zuma’s region, is arguing that it is entitled to another term since she has decided not to stand for a second. Other regions, such as eastern and central Africa, have already identified their nominees. It is also rumored that some regions have already initiated diplomatic efforts to gather votes for their preferred candidates.

In April 2016, SADC chose Botswana’s minister of foreign affairs, Dr. Pelonomi Venson-Moitoi, as its preferred candidate. Nevertheless, experts believe that even if South Africa flexes its muscles to support Venson-Moitoi’s candidacy (which it is most likely to do), her bid is not likely to succeed this time because Botswana has not always supported the AU on critical issues, such as the International Criminal Court, and hence does not have the goodwill necessary to garner support for its candidate among the various heads of state.

Venson-Moitoi is expected to face two other candidates—Dr. Specioza Naigaga Wandira Kazibwe of Uganda (representing east Africa) and Agapito Mba Mokuy of Equatorial Guinea (representing central Africa). Although Mokuy is relatively unknown, his candidacy could be buoyed by the argument that a Spanish-speaking national has never held the chairperson position, as well as the fact that, despite its relatively small size, Equatorial Guinea—and its president, Teodoro Obiang Nguema—has given significant assistance to the AU over the years. Obiang Nguema’s many financial and in-kind contributions to the AU could endear his country and its candidate to the other members of the AU.

In fact, during his long tenure as president of Equatorial Guinea, Obiang Nguema has shown significant interest in the AU, has attended all assemblies, and has made major contributions to the organization. In addition to the fact that Equatorial Guinea hosted AU summits in 2011 and 2014, Obiang Nguema served as AU chairperson in 2011. Thus, a Mokuy candidacy for the chairperson of the AU Commission could find favor among those who believe it would give voice to small and often marginalized countries, as well as members of the continent’s Spanish-speaking community. Finally, the opinion held by South Africa, one of the continent’s most important and influential countries, on several issues (from the political situation in Burundi to the International Criminal Court and its relations with Africa) appears closer to that of Equatorial Guinea’s than Botswana’s.

Of course, both Venson-Moitoi and Kazibwe are seasoned civil servants with international and administrative experience, and either has the potential to serve as an effective chairperson. However, the need to give voice within the AU to the continent’s historically marginalized regions could push Mokuy’s candidacy to the top.

Nevertheless, supporters of a Mokuy candidacy may worry that the accusations of corruption and repression leveled at Equatorial Guinea by the international community could negatively affect how their candidate is perceived by voters.

Also important to voters are candidates’ relationships with former colonial powers. In fact, during the last election, one argument that helped defeat then-Chairperson Jean Ping was that both he and his (Gabonese) government were too pro-France. This issue may not be a factor in the 2016 elections, though: Equatorial Guinea, Uganda, and Botswana are not considered to be especially close to their former colonizers.

Finally, gender and regional representation should be important considerations for the voters who will be called upon to choose a chairperson for the AU Commission. Both Venson-Moitoi and Kazibwe are women, and the election of either of them would continue to support diversity within African leadership. On the other hand, Mokuy’s election would enhance regional and small-state representation.

The fight to be commissioner of peace and security

Also open for contest are the portfolios of Peace and Security, Political Affairs, Infrastructure and Energy, Rural Economy and Agriculture, Human Resources, and Science and Technology. Many countries are vying for these positions on the commission in an effort to ensure that their status within the AU is not marginalized. For example, Nigeria and Algeria, both of which are major regional leaders, are competing to capture the position of commissioner of Peace and Security. Algeria is keen to keep this position: It has held this post over the last decade, and, if it loses this position, it would not have any representation on the next commission—significantly diminishing the country’s influence in the AU.

Nigeria’s decision to contest the position of commissioner of Peace and Security is based on the decision by the administration of President Muhammadu Buhari to give up the leadership of Political Affairs. Historically, Nigeria has been unwilling to compete openly against regional powers for leadership positions in the continent’s peace and security area. Buhari’s decision to contest the portfolio of Peace and Security is very risky, since a loss to Algeria and the other contesting countries will leave Nigeria without a position on the commission and would be quite humiliating to the president and his administration.

Struggling to maintain a regional, gender, and background balance

Since the AU came into being in 2002, there has been an unwritten rule that regional powers (e.g., Algeria, Kenya, Nigeria, South Africa) should not lead or occupy key positions in the AU’s major institutions. Thus, when Dr. Zuma was elected in 2012, South Africa was severely criticized, especially by some smaller African countries, for breaking that rule. The hope, especially of the non-regional leaders, is that the 2016 election will represent a return to the status quo ante since most of the candidates for the chairperson position hail from small- and medium-sized countries.

While professional skills and international experience are critical for an individual to serve on the commission, the AU is quite concerned about the geographical distribution of leadership positions, as well as the representation of women on the commission, as noted above. In fact, the commission’s statutes mandate that each region present two candidates (one female and the other male) for every portfolio. Article 6(3) of the commission’s statutes states that “[a]t least one Commissioner from each region shall be a woman.” Unfortunately, women currently make up only a very small proportion of those contesting positions in the next commission. Thus, participants must keep in mind the need to create a commission that reflects the continent’s diversity, especially in terms of gender and geography.

Individuals who have served in government and/or worked for an international organization dominate leadership positions in the commission. Unfortunately, individuals representing civil society organizations are poorly represented on the nominee lists—unsurprising, given that the selection process is controlled by civil servants from states and regional organizations. Although this approach to staffing the commission guarantees the selection of skilled and experienced administrators, it could burden the commission with the types of bureaucratic problems that are common throughout the civil services of African countries, notably rigidity, tunnel vision, and an inability or unwillingness to undertake bold and progressive initiatives.

No matter who wins, the African Union faces an uphill battle

The AU currently faces many challenges, some of which require urgent and immediate action and others that can only be resolved through long-term planning. For example, the fight against terrorism and violent extremism, and securing the peace in South Sudan, Burundi, Libya, and other states and regions consumed by violent ethno-cultural conflict, require urgent and immediate action from the AU. Issues requiring long-term planning include helping African countries improve their governance systems, strengthening the African Court of Justice and Human Rights, facilitating economic integration, effectively addressing extreme poverty and inequality in the distribution of income and wealth, responding fully and effectively to pandemics, and working toward the equitable allocation of water, especially in urban areas.

Finally, there is the AU’s dependence on foreign aid for its financing. When Dr. Dlamini Zuma took over as chairperson of the AU Commission in 2012, she was quite surprised by the extent to which the AU depends on budget subventions from international donors and feared that such dependence could interfere with the organization’s operations. The AU budget for 2016 is $416,867,326, of which $169,833,340 (40 percent) is assessed on Member States and $247,033,986 (59 percent) is to be secured from international partners.  The main foreign donors are the United States, Canada, China, and the European Union.

Within Africa, South Africa, Angola, Nigeria, and Algeria are the most reliable contributors, while other relatively rich countries—Egypt, Libya, Sudan, and Cameroon—are struggling to pay. Libya’s civil war and its inability to form a permanent government are interfering with its ability to meet its financial obligations, even to its own citizens. Nevertheless, South Africa, Nigeria, Angola, Egypt, and Libya, the continent’s richest countries, are expected to eventually cover as much as 60 percent of the AU’s budget, which would help reduce the organization’s continued dependence on international donors. While these major continental and international donors are not expected to have significant influence on the elections for leadership positions on the AU Commission, they are likely to remain a determining factor in the types of programs that the AU can undertake.

Dealing fully and effectively with the multifarious issues that plague the continent requires AU Commission leadership that is not only well-educated and skilled, but that has the foresight to help the continent develop into an effective competitor in the global market and a full participant in international affairs. In addition to helping the continent secure the peace and provide the enabling environment for economic growth and the creation of wealth, this crop of leaders should provide the continent with the leadership necessary to help states develop and adopt institutional arrangements and governing systems that guarantee the rule of law, promote the protection of human rights, and advance inclusive economic growth and development.


[1] The AU consists of all the African countries that are members of the United Nations, except the Kingdom of Morocco, which left the organization after it recognized the Sahrawi Arab Democratic Republic (Western Sahara). Morocco claims the Western Sahara as part of its territory.

[2] The AU Commission is made up of a number of commissioners who deal with various policy areas, including peace and security, political affairs, infrastructure and energy, social affairs, trade and industry, rural economy and agriculture, human resources, science and technology, and economic affairs. According to Article 3 of its Statutes, the Commission is empowered to “represent the Union and defend its interests under the guidance of and as mandated by the Assembly and Executive Council.”


Managing Nuclear Proliferation in the Middle East

This paper appears as chapter 4 of Restoring the Balance: A Middle East Strategy for the Next President. See the book overview and executive summaries for information on other chapters.

EXECUTIVE SUMMARY

Current U.S. efforts to stop Iran’s nuclear program have failed. Fortunately, however, because of technical limits, Iran appears to be two to three years…


The Beginning of a Turkish-Israeli Rapprochement?


Since the Mavi Marmara incident of May 2010, in which nine Turkish activists were killed by Israel Defense Forces fire, relations between Turkey and Israel have been suspended. Two major regional developments in 2012, the lingering Syrian crisis and Israel’s Operation Pillar of Defense in Gaza, have underscored the lack of a senior-level dialogue between Israel and Turkey. However, in the wake of the latest Gaza crisis, officials on both sides have confirmed press reports detailing recent bilateral contacts between senior Turkish and Israeli officials in Cairo and Geneva, possibly signaling a shift in the relationship.

Since 1948, Israeli-Turkish relations have been through periods of disagreement and tension, as well as periods of cooperation and understanding. Relations developed gradually over the years and eventually reached their peak in the 1990s, when the two countries forged a strategic partnership, supported and strengthened by the United States. During those years, the Turkish general staff and the Israeli defense establishment were the main proponents of an enhanced relationship between the two countries. Military cooperation and coordination with Israel fit the broader worldview of the secularist Turkish defense establishment. Turkey’s military structure and posture were NATO- and Mediterranean-oriented, and within this framework Israel was naturally viewed as an ally. For its part, Israel’s defense establishment recognized Turkey’s geostrategic importance and the potential that existed for defense collaboration.

Positive relations between the two countries continued well into the first decade of the 21st century but began to slow when Turkey experienced a new social transformation and political Islamists became the dominant political force in the country. The ensuing clash between the new Turkish leadership and the military elite eroded the military’s standing; this, coupled with a major shift in Turkish foreign policy, inevitably led to a souring of the relationship between Turkey and Israel. With the launch of Israel’s Operation Cast Lead in December 2008, relations began to seriously weaken, as Turkey expressed clear disapproval of Israel’s actions. Despite its efforts, the United States was not able to repair relations between the two countries. The Mavi Marmara incident in 2010 led to a further decline in relations.

Two and a half years have passed since the incident on board the Turkish passenger vessel, and relations between Turkey and Israel remain strained, with the two countries locked into their positions. Turkish Prime Minister Recep Tayyip Erdoğan insists that if Israel wishes to normalize relations, it must accept three conditions: issue a formal apology over the incident; compensate the families of the nine Turks (one of them an American citizen) killed on board; and lift the naval blockade of Gaza. Not surprisingly, Israeli Prime Minister Benjamin Netanyahu is reportedly not willing to meet the three Turkish demands.

In recent months, Israel has made several attempts, both directly and through third parties, to find a formula that will restore the dialogue between Jerusalem and Ankara, but to no avail. Erdoğan publicly rejected these Israeli diplomatic approaches, reiterating the need to address the three conditions before further talks can ensue. As a result, bilateral ties, excluding trade, are practically at a standstill, with low level (second secretary) diplomatic representation in respective embassies in both Ankara and Tel Aviv.

Over the past year and a half, the upheaval in the Arab world has occupied the top of the Turkish foreign policy agenda. Thus, the relationship with Israel has not been a priority for the Turks, pushing Israel to invest greater efforts in developing its ties with Turkey’s rivals and neighbors, including Greece, Cyprus, Bulgaria, and Romania. Moreover, Turkey, previously an Israeli vacation hotspot, has experienced a substantial decline in the number of Israeli tourists.

The Turkish-Israeli relationship was not a high priority on the U.S. administration’s foreign policy agenda in the months leading up to the U.S. presidential elections. While the United States did previously engage in efforts to bridge the gap between the two countries, recently, other issues, including the September 11, 2012, attack on the U.S. mission in Benghazi, Libya, the Syria crisis, and Iran’s nuclear program, have consumed the attention of U.S. policymakers dealing with the Middle East.

Against this backdrop, Erdoğan’s willingness to allow his head of intelligence to meet the head of Mossad in Cairo, and his foreign ministry’s director general to meet with senior Israeli envoy Ciechanover in Geneva, may seem surprising, especially considering Erdoğan’s own harsh rhetoric against Israel during the initial phases of Operation Pillar of Defense. Turkish Foreign Minister Ahmet Davutoğlu explained that the meetings were aimed at finding an end to the Gaza crisis and that there would be no discussion of reconciliation so long as Israel did not address Turkey’s three previously stated conditions. Israeli officials confirmed that while the discussion in Cairo focused on Gaza, the meeting in Geneva went beyond the Gaza issue, and envoy Ciechanover did in fact suggest possible options to address Turkey’s three stipulations.

What does all this mean?

Turkey’s recent moves can be attributed to a growing realization that it has hurt its interests and hampered its diplomatic efforts by not maintaining dialogue and open channels with Israel. Its absence allowed Muslim Brotherhood-led Egypt to take center stage and orchestrate, together with the United States, the ceasefire between Israel and Hamas. Turkey, which takes pride in facilitating diplomacy in the Middle East (as demonstrated in the 2008 Turkish-brokered Syrian-Israeli proximity peace talks), was marginalized in the latest round of negotiations on Gaza simply for having damaged its relationship with Israel.

Furthermore, as Turkey’s involvement in the Syrian crisis deepens, and as it prepares to deploy Patriot missiles on the Turkish-Syrian border, Turkey most certainly will aspire to improve intelligence cooperation with Israel. With regard to Syria, there is very little disagreement, if any, between Turkey and Israel, and cooperating on this issue could prove to be very useful and beneficial for both countries.

The possible cooperation on Syria does not mean that Turkey will drop its insistence on Israel meeting the three conditions, but it may indicate a greater inclination to show flexibility with regard to the actual wording and terms of those conditions.

Israel may be willing to be more forthcoming toward Turkey in respect to the three conditions, so long as it receives assurances that Turkey will not just pocket an Israeli apology and compensation and revert to its anti-Israel mode. Israel has its own concerns, and feels more isolated than ever before in a volatile Middle East region. Its need to rely solely on Egyptian President Mohamed Morsi’s mediating efforts last week certainly left Israeli decision makers uneasy. Israel will likely continue to reach out to Turkey in the coming weeks, but a final decision, which may include compromises, will possibly wait until after the Israeli elections in January 2013.

One must not lose sight of the fact that the Turkey-Israel relationship has deteriorated to a low point not only because of disagreement on political issues but also because of the clash of personalities between leaders on both sides. Officials on both sides will face tough decisions in the coming year, and will likely have to go against their own constituencies and popular public sentiments in order to repair relations.

The distrust between the two countries is deep and the level of animosity at the leadership level is high. While it is encouraging that they are finally communicating with one another, progress will undoubtedly require a third party’s presence and involvement. In this respect, the Obama administration has an important role to play. Unquestionably, a rapprochement between Turkey and Israel would serve U.S. global and regional strategic interests. The strong rapport between U.S. President Barack Obama and Erdoğan, and what seem, in the aftermath of the Gaza crisis, to be more frequent consultations between Obama and Netanyahu, can contribute to a U.S.-brokered deal between the two sides. If successful, this deal would address not only the Mavi Marmara incident and Turkish demands, but it would also lay out guidelines and a “code of conduct” for interaction between the two sides in times of war and peace and sponsor a Turkish-Israeli dialogue on regional developments and issues of mutual concern. After a long disconnect between the parties, recent interactions between the two regarding the latest Gaza crisis signal that both sides are predisposed to take another look at seriously engaging with each other again, and the United States can help make this a reality.

Perhaps this could be one of Secretary of State Hillary Clinton’s last missions before leaving office.


Turbulence in Turkey–Israel Relations Raises Doubts Over Reconciliation Process


Seven months have passed since Israel officially apologized to Turkey for the Mavi Marmara incident of May 2010, in which nine Turks were killed by Israeli fire. What seemed at the time to be a diplomatic breakthrough, capable of setting into motion a reconciliation process between two of America’s closest allies in the region, has been frustrated by a series of spiteful interactions.

The Turkish-Israeli alliance of the 1990s and the first decade of the 2000s was viewed by senior U.S. officials as an anchor of stability in a changing region. The relationship between Ankara and Jerusalem served vital U.S. interests in the Eastern Mediterranean and the Middle East, and it was therefore a U.S. priority to restore dialogue between the two former allies-turned-rivals. The Obama administration, throughout both terms, has made a continuous effort to rebuild the relationship and was ultimately successful in setting the stage for the Israeli apology and the Turkish acceptance of that apology. The U.S. was not the only party that stood to gain from reconciliation; both Turkey and Israel have many incentives for normalizing relations. For Turkey, the reestablishment of a dialogue with Israel has four main potential benefits: It would allow for greater involvement in the Israeli-Palestinian peace negotiations; it would provide greater opportunity for information sharing on developments in the Syrian civil war, giving Turkey a more comprehensive perspective; it would open up more economic opportunities, especially with regard to cooperation in the field of natural gas (following the recent ruling by Israel’s High Court of Justice that paves the way toward natural gas exports); and it would remove an irritant from Turkey’s relations with the United States. In turn, Israel would benefit from the reestablishment of dialogue in three major ways: Rebuilding relations between senior Turkish and Israeli officials would facilitate intelligence sharing and help Israel gain a more complete picture of the Syrian crisis; Israel would have the opportunity to contain delegitimization efforts in the Muslim and Arab worlds; and Israel might be able to rejoin NATO-related activities and maneuvers.

Despite these enticements, in recent weeks a series of news stories and revelations has put the Turkish-Israeli relationship, yet again, in the international spotlight, raising doubts about whether reconciliation between the two countries is possible at all at this time. As the Obama administration struggles to deal with the fallout of allegations that the NSA tapped the office and cellular phones of Western European leaders, and as it focuses on more pressing issues in the Middle East (the P5+1 negotiations with Iran, the Syrian crisis, Egypt, and negotiations between Israel and the Palestinians), it finds itself with little time to chaperone the Turkish-Israeli reconciliation process. Nevertheless, despite tensions, direct talks are reportedly being held between senior Turkish and Israeli officials in an effort to reach a compensation agreement in the near future.

The Israeli apology and the Turkish acceptance, orchestrated by Barack Obama during his trip to the region in March 2013, were an essential first step in a long process of reconciliation aimed at normalizing relations between the two countries after a four-year hiatus in their relationship. The next step was an agreement under which Israel was to pay compensation to the families of the victims of the Mavi Marmara. Several rounds of talks between senior Turkish and Israeli representatives were reportedly held during the spring of 2013 in Ankara, Jerusalem, and Washington, but to no avail. Disagreements over the amount of compensation to be paid by Israel were reported, but later, in July, Turkish Deputy Prime Minister Arinc clarified that money was not the issue. He stated that the problem lay in Israel’s refusal to acknowledge that the payment was a result of its “wrongful act.” Arinc added that another point of contention was Turkey's demand that Israel cooperate in improving the living conditions of the Palestinians in the Occupied Territories. Arinc emphasized that only when these two conditions were met could the countries move forward to discuss the specific amount of compensation.

The shadow cast over negotiations by Arinc’s comments was darkened by a string of comments made by Turkish Prime Minister Erdogan against Israel. First, he blamed the “interests lobby” (perhaps a reference to the so-called “Israel Lobby”) for the large protests that took place against him and his government in Istanbul’s Taksim Square and across Turkey in June. Then, in August, Erdogan accused Israel of backing the military coup in Egypt, citing comments made in 2011 by the French Jewish philosopher Bernard-Henri Lévy as proof of a long-standing Israeli-Jewish plot to deny the Muslim Brotherhood power in Egypt. This drew sharp Israeli criticism, notably from former Israeli Foreign Minister Avigdor Lieberman, who compared Erdogan to the Nazi minister of propaganda, Joseph Goebbels.

Despite these setbacks, bilateral trade between Turkey and Israel has expanded since the official apology and the number of Israeli tourists returning to visit Turkey has risen dramatically. Yet it is clear that with such harsh rhetoric it will be difficult to effectively advance a reconciliation process. Among American, Turkish and Israeli experts, the prevailing view is that Erdogan and the AKP government, mainly due to domestic political considerations, are not interested in normalizing relations with Israel, and that the only reason Erdogan accepted Israeli Prime Minister Netanyahu’s apology was to gain favor with U.S. President Obama.

At the end of August, as the plan for a U.S. military strike in Syria gained momentum, relative calm prevailed in relations between Ankara and Jerusalem, with both countries focusing on preparations and plans to address the fallout of such an attack. Yet just when tensions seemed to be easing, and Turkish President Gul stated in a September interview with the Washington Post that negotiations "are getting on track," a series of news stories and revelations injected a poisonous dimension into the already-strained ties.

In early October another round of Turkish-Israeli verbal attacks and counterattacks was sparked by a Wall Street Journal profile of the Turkish head of intelligence, Hakan Fidan, which included a quote from an anonymous Israeli official stating, "It is clear he (Fidan) is not an enemy of Iran." Shortly after came a revelation by David Ignatius in the Washington Post, citing reliable sources, that Fidan had allegedly passed the names of 10 Iranians working for the Israeli Mossad to Iranian intelligence in early 2012; the ten were later arrested by the Iranian authorities. Senior Turkish officials blamed Israel for leaking the story to Ignatius, and the Turkish daily Hurriyet reported that Fidan was considering severing ties between the Turkish and Israeli intelligence agencies. Reactions in Turkey and Israel to the Ignatius story were harsh and emotional. Turkish officials denied the report, while Israeli officials refrained from any public comment. The front-page headline of Yediot's Friday edition read, “Turkish Betrayal,” and former Foreign Minister Lieberman voiced his opposition to the apology made in March, arguing that it had weakened Israel’s stance and image in the region and attacking Erdogan for not being interested in a rapprochement.

In recent days Prime Minister Erdogan has struck a more conciliatory tone, saying that if Israel denies involvement in the leak, then Turkey must accept it. Israeli media outlets reported over the weekend that Israeli and Turkish negotiators are again trying to reach a compensation agreement. Israeli experts quoted in these reports view November 6 as a possible target date for concluding negotiations, since the verdict in former Foreign Minister Lieberman’s corruption trial is expected that day. If acquitted, Lieberman will return to the foreign minister’s post and will likely try to block any attempt to reach an agreement. Turkish experts, however, assess that Turkey is simply not ready to move forward at this time due to domestic political constraints, as Prime Minister Erdogan and the AKP brace for presidential and local elections in 2014.

Nevertheless, the next few weeks will be crucial in determining whether Turkey and Israel can move forward and finally put the Marmara incident behind them. Both countries have separate disagreements with the U.S.: Turkey over Syria, Egypt, and its decision to build a missile defense system with a Chinese firm under U.S. sanctions; Israel over the Iran nuclear issue. However, the lingering Syrian crisis and reported progress on the Israeli-Palestinian track, in addition to economic considerations such as trade, tourism, and above all potential cooperation on natural gas, may entice both sides to proceed. Undoubtedly, a final deal will require strong U.S. support.

Image Source: © Osman Orsal / Reuters

Despite Gaza Conflict, Turkey and Israel Would Benefit from Rapprochement


The recent outbreak of hostilities between Israel and Hamas is a serious setback to ongoing Turkish-Israeli normalization efforts. Israel launched Operation Protective Edge, its third operation against Hamas since leaving Gaza in 2005, in response to rockets and missiles fired by Hamas from Gaza into Israel. As in Israel’s two previous Gaza campaigns, Operation Cast Lead (2008-09) and Operation Pillar of Defense (2012), Turkey quickly condemned Israel’s actions, yet offered to mediate, together with Qatar, between Israel and Hamas.

After Turkish Prime Minister Recep Tayyip Erdogan, in the midst of his presidential campaign, equated Israeli policy towards Gaza with a “systematic genocide” and accused Israel of surpassing “Hitler in barbarism,” Israel accepted an Egyptian cease-fire proposal. Israeli Foreign Minister Avigdor Lieberman accused Turkey and Qatar of “sabotaging the cease-fire proposal,” and Israeli Prime Minister Benjamin Netanyahu complained to U.S. Secretary of State John Kerry about Erdogan’s statements.

Turkish leaders’ harsh rhetoric sparked violent demonstrations in front of Israel’s embassy in Ankara and its consulate in Istanbul, and led the Israeli government to evacuate diplomats’ families and to issue a travel warning advising against travel to Turkey, which prompted numerous cancellations of tourist travel. On Sunday, Netanyahu refrained from declaring Turkish-Israeli reconciliation dead, but accused Erdogan of an anti-Semitism more aligned with Tehran than with the West.

These heightened Israeli-Turkish tensions come just as the two countries were negotiating a compensation deal for families of victims of the May 31, 2010 Mavi Marmara incident. The deal was intended to facilitate a long-awaited normalization between the two countries, more than a year after Israel’s official apology. The draft stipulated an estimated $21 million in Israeli compensation, the reinstatement of each country’s ambassador, and the reestablishment of a senior-level bilateral dialogue. However, a series of issues has prevented the deal’s finalization, including Turkish domestic political considerations about the timing (related to the March 2014 municipal elections and the August 2014 presidential election) and Israeli demands for Turkish commitments to block future lawsuits related to the Marmara incident.

With the ongoing Gaza conflict, prospects for normalization have again faded at least in the short term, and policymakers on both sides seem to have accepted a limited relationship. Erdogan even declared publicly that as long as he’s in power, there is no chance “to have any positive engagement with Israel”, dismissing any prospect for normalization. Israeli-Turkish animosity runs deep, not only among leaders, but at the grassroots level as well. While it may be difficult to look beyond the short term, a focus on the broader regional picture suggests four reasons why the two countries would benefit from restoring ties.

  • First, they share strategic interests. Turkey and Israel see eye to eye on many issues: preventing a nuclear Iran; concerns over spillover from the Syrian civil war; and finally, the rise of the Islamic State of Iraq and the Levant (ISIS/ISIL) and security and stability in Iraq. A resumed dialogue and renewed intelligence sharing can pave the way for more concrete cooperation between Turkey and Israel on all these regional issues, with development of a joint approach toward Syria topping the agenda.
  • Second, while the regional environment may be beyond their control, the bilateral relationship is not. Normalization can eliminate one factor of instability in an unstable region.
  • Third, Washington sees greater cooperation and cohesiveness in the U.S.-Turkey-Israel triangle as essential. President Obama has sought to restore a dialogue between Ankara and Jerusalem, including efforts to “extract” an Israeli apology and Turkish acceptance. Senior U.S. officials remain active in trying to improve the Turkish-Israeli relationship.
  • Fourth, normalization may bring benefits in the economic sphere, with possible cooperation on natural gas, tourism, and enhanced trade. Gas in particular is viewed as a possible game-changer. In 2013, bilateral trade crossed the $5 billion mark for the first time, and data from the first six months of 2014 indicate a continued rise. A political thaw can help accelerate these joint business opportunities. 

Nevertheless, at this stage it is clear that serious U.S. involvement is required for Turkish-Israeli rapprochement to succeed, even in a limited fashion. At present, there are far greater challenges for U.S. foreign policy in the region. The question now is whether the relationship between two of America’s closest regional allies reflects a new “normal,” or whether the leaders of both countries, and the U.S., can muster the political will to reconnect the U.S.-Turkey-Israel triangle along more productive lines.

Check back to Brookings.edu for Dan Arbell’s upcoming analysis paper: The U.S.-Turkey-Israel Triangle.

Image Source: © Osman Orsal / Reuters

Should "Progressives" Boycott Whole Foods Over CEO's Statements on Health Care?

I am constantly amazed at the level of political discourse in the US. A debate about health care degenerates into scares about “death panels” and boycotts of Whole Foods because their CEO is against it. It is all a bit much, and a complete mystery…





High Levels Of BPA Found In Cash Register Receipts, What You Can Do To Protect Yourself

Image Source: red5standingby. The Environmental Working Group (EWG), a nonprofit research organization based in Washington, DC, has discovered that many cash register receipts contain levels of Bisphenol-A (BPA) hundreds of times higher than those found in…





How an 'Untouchable Day' can boost your productivity

Where distractions are weeded out, focus can take root.





Great protected bike lanes are popping up in Washington DC (video)

If we can put a center-running bike lane down Pennsylvania Avenue, we should be able to do it almost everywhere!





Washington Metro closure is a symptom of a much bigger problem

All over North America we are letting our infrastructure rot and short-circuit.





Forget bike lanes, we need Protected Mobility Lanes

The number of people using alternative mobility devices is exploding, and they will be demanding safe routes.





Heated glass: Could this be the least sustainable building product ever invented?

You want giant windows but don't like drafts? Plug in your windows and turn them into toasters.





Utensilmate is a great candidate for the Wrongest Product Award

I can't decide if this is just what I always needed or the worst product ever put on Kickstarter.





United Nations Environment Programme announces the 2014 theme of World Environment Day

Vote today for your favorite slogan!





Video showdown: Vote for the best in the United Nations Environment Programme’s competition

Send one of these video bloggers to cover World Environment Day.





World Environment Day highlights Barbados’ sustainability programs

The host country of the United Nations World Environment day is working to protect its natural resources and adapt to climate change.





World Environment Day 2015 to promote sustainable lifestyles

The UN Environment Program takes aim at unsustainable consumption in 2015.





From Wildlife Photography to Conservation Projects and Beyond, a Look at 2012 According to Jaymi

Looking back on this year, so much happened! I wanted to take a moment to revisit the articles I had the most fun writing, the issues I had the most fun covering, and the adventures I had the most fun experiencing. Enjoy this look back!





Wineries For Climate Protection – the Manifesto!

Here's the manifesto by the Spanish wine industry to fight climate change by making wineries more eco-friendly. Vines are very sensitive to climate change, so their environment, landscape, culture, and tradition need protecting.





Deadly Floods in Thailand Are A Symptom of a Larger Problem

Since July, floods have ravaged Thailand, causing $3 billion in damage and killing nearly 300 people. But as the waters approach the capital city, Prime Minister Yingluck Shinawatra says she is confident…





Multi-layered urban housing prototype packs in plenty of great small space ideas

Using a series of overlapping mezzanines and spaces, this accessible, urban housing prototype explores the possibilities of living small but comfortably in the city.





Are electric cars part of the climate solution or are they actually part of the problem?

If we are really going to make a dent in emissions we have to take real estate away from people who drive and redistribute it to people who walk and bike.





Money can't fix circadian rhythm problems. Sunlight and freedom can.

Circadian rhythm lighting products won't fix body clock problems.





Christmas Trees Given Jellyfish Genes Could Produce Their Own Light

The only downside, of course, is that your self-lit holiday centerpiece actually would be a Frankenstein tree.





Could Cities Benefit from Small-Scale, Local "Urban Acupuncture" Projects Like This? (Photos)

Woven from bamboo, this inviting structure transforms an empty lot in busy Taipei into a haven where neighborhood residents can relax and gather over a fire.





Air pollution is hurting human procreation

It distorts sperm, makes it harder to get pregnant, and can lead to premature births and low birth weights -- yet another reason why cars and humans don't mix well.