
Why Bridgegate proves we need fewer hacks, machines, and back room deals, not more


I had been mulling a rebuttal to my colleague and friend Jon Rauch’s interesting—but wrong—new Brookings paper praising the role of “hacks, machines, big money, and back room deals” in democracy. I thought the indictments of Chris Christie’s associates last week provided a perfect example of the dangers of all of that, and so of why Jon was incorrect. But in yesterday’s L.A. Times, he beat me to it, himself defending the political morality (if not the efficacy) of their actions, and in the process delivering a knockout blow to his own position.

Bridgegate is a perfect example of why we need fewer "hacks, machines, big money, and back room deals" in our politics, not more. There is no justification whatsoever for government officials abusing their powers, stopping emergency vehicles and risking lives, and making kids late for school and parents late for their jobs, all to retaliate against a mayor who withheld an election endorsement. We vote in our democracy to make government work, not to break it. We expect officials to serve the public, not their personal interests. This conduct weakens our democracy; it does not strengthen it.

It is also incorrect that, as Jon suggests, reformers and transparency advocates are partly to blame for the gridlock that sometimes afflicts American government at every level. As my co-authors and I demonstrated at some length in our recent Brookings paper, “Why Critics of Transparency Are Wrong,” and in our follow-up op-ed in the Washington Post, reform and transparency efforts are no more responsible for the current dysfunction in our democracy than they were for the gridlock in Fort Lee. Indeed, in both cases, “hacks, machines, big money, and back room deals” are a major cause of the dysfunction. The vicious cycle of special interests, campaign contributions, and secrecy too often freezes our system into stasis, both on a grand scale, when special interests block needed legislation, and on a petty scale, as in Fort Lee. The power of megadonors has, for example, made dysfunction within the House Republican Caucus worse, not better.

Others will undoubtedly address Jon’s new paper at length. But one other point is worth noting now. As in foreign policy discussions, I don’t think Jon’s position merits the mantle of political “realism,” as if those who want democracy to be more democratic and less corrupt are fluffy-headed dreamers. It is the reformers who are the true realists. My co-authors and I stressed in our paper the importance of striking realistic, hard-headed balances, e.g., in discussing our non-absolutist approach to transparency; alas, Jon gives that the back of his hand, acknowledging our approach but discarding the substance to criticize our rhetoric as “radiat[ing] uncompromising moralism.” As Bridgegate shows, the reform movement’s “moralism” correctly recognizes the corrupting nature of power, and accordingly advocates reasonable checks and balances. That is what I call realism. So I will race Jon to the trademark office for who really deserves the title of realist!


Image Source: © Andrew Kelly / Reuters
      





Q & A with Ambassador Norman Eisen


Editor's Note: In September of this year, Visiting Fellow Norman Eisen was featured in the Council on Governmental Ethics Laws (COGEL) members-only magazine, The Guardian. An abbreviated version of his interview is featured below.

Interview conducted by Wesley Bizzell, Assistant General Counsel, Altria Client Services LLC.

Recently, you addressed the Italian Parliament to discuss ethics in government, as that legislative body considers adopting its own code of ethical conduct. In that speech, you noted that you believe there are four key concepts at the center of U.S. federal ethics laws. What are those four concepts, and why are they important?

Firstly, I’d like to note the importance of focusing on four concepts. The House of Representatives Ethics Manual is 456 pages long, far too long to be of any real use in creating an ethics system. Instead, these four principles serve as a foundation upon which different governments can build their own sets of rules based on their own unique needs.

I focused on just four to make a point about priorities. The first is “conflicts”—that is, problems that arise when an individual’s personal interests and parliamentary duties may be at odds with one another. The second is “gifts.” Even if there isn’t an explicit quid pro quo involved, when a political figure accepts a gift from someone with a demonstrated interest in government decision-making, the suspicion of misconduct will always be there. The “revolving door” is the third core concept. When individuals rotate between the private sector and the public sector over and over again, they naturally form relationships that tempt them toward unethical behavior. Finally, there is the “use of official resources.” Officials must use official resources only for official purposes, taking particular care not to conduct any campaign activity on the taxpayer’s dime. The goal with these four priorities is not only to keep people from behaving unethically, but also to make sure that no one appears to be doing anything unethical.

In that speech, you said that focusing on these four areas keeps you from losing the forest for the trees when working with ethics codes. Can you elaborate on that?

There’s always a danger for members of the executive branch, because the system of rules and regulations that governs ethical behavior is itself so complex. When it’s embedded in equally complicated and overlapping sets of statutes, you risk creating rules so specific that they’re practically useless. The same is true in the legislative branch and, I dare say, in the federal judicial branch, as well as at the state and local levels. You’re always on the edge of being lost in the minutiae.

In fact, you can often make wrong decisions if you focus in too much on the specifics, because you lose sight of the larger picture that guides the rules. There are always options in ethical dilemmas, and the big picture needs to be kept in focus.

While at the White House serving as Special Counsel to the President for Ethics and Government Reform, you oversaw numerous significant changes in the area of open government—including helping craft and implement President Obama’s Open Government Directive; publishing White House visitor logs on the internet; and generally improving the Freedom of Information Act (FOIA) process. Which change in the area of open government are you most proud of?

I was struck when we began the interview by the list of topics—campaign finance, lobbying, ethics, elections, and FOIA issues—because all of those were part of my portfolio as Special Counsel to the President for Ethics and Government Reform during the first two years of the Obama administration. I would have to say that I’m most proud of my role in the President’s decision to put all of the White House visitor records on the internet.

Remember, in previous administrations, Democratic and Republican alike, plaintiffs had to litigate for years just to get a handful of visitor records. To have all of the visitor records on the internet, categorized into various types, opens access to the White House to an unprecedented degree. There are now over four-and-a-half million visitor records available on the White House website, with more added every month. I think that that is remarkable.

Truthfully, I was torn between that accomplishment and a second one, which is that the President and his staff in the White House have had the longest run in presidential history (knock on wood) without a major ethics scandal or a grand jury investigation, indictment, or conviction. I was tempted to list that second fact as the accomplishment of which I was most proud. But it occurred to me that the death of White House scandal is actually a function of the exceptional level of transparency that the visitor records represent. Transparency helps ensure people don’t have meetings they shouldn’t be having, which keeps them out of trouble. So I’ll offer that second accomplishment as a part of the first one.

In your view, what was the most significant lobbying and ethics reform during your tenure at the White House?

No doubt about it: reversing the revolving door. Craig Holman of Public Citizen, who studies these issues, says we were the first in the world to create a reverse revolving door. I think it is absolutely critical to slow the revolving door in both directions—both coming out of government and going in.

I should also note that the comprehensive nature of the ethics system we put into place in the Obama administration bears a responsibility for the good results. The first rule, of course, of any ethics system is “tone at the top.” The president exemplifies that. He has the highest standards of ethics himself, and as a result everyone around him feels he will be personally let down if they don’t embrace the ethics system. Good results flow from that. Looking back, we can identify certain aspects that have been more and less successful, but it’s important to recognize that the positive results are owed to the gestalt. Our transparency and ethics system was one of the most thorough and transparent that I’ve seen in any government, and the results speak for themselves.


Image Source: © Petr Josek Snr / Reuters
      





More Czech governance leaders visit Brookings


I had the pleasure earlier this month of welcoming my friend, Czech Republic Foreign Minister Lubomir Zaoralek, here to Brookings for a discussion of critical issues confronting the Europe-U.S. alliance. Foreign Minister Zaoralek was appointed to his current position in January 2014 after serving as a leading figure in the Czech Parliament for many years. He was accompanied by a distinguished delegation that included Dr. Petr Drulak of the Foreign Ministry, and Czech Ambassador Petr Gandalovic. I was fortunate enough to be joined in the discussion by colleagues from Brookings including Fiona Hill, Shadi Hamid, Steve Pifer, and others, as well as representatives of other D.C. think tanks. Our discussion spanned the globe, from how to respond to the Syrian conflict, to addressing Russia’s conduct in Ukraine, to the thaw in U.S.-Cuba relations, to dealing with the refugee crisis in Europe. The conversation was so fascinating that the sixty minutes we had allotted flew by and we ended up talking for two hours—and we still just scratched the surface.

Amb. Eisen and FM Zaoralek, October 2, 2015

Yesterday, we had a visit from Czech State Secretary Tomas Prouza, accompanied by Ambassador Martin Povejsil, the Czech Permanent Envoy to the EU. We also talked about world affairs. In this case, that included perhaps the most important governance matter now confronting the U.S.: the exceptionally entertaining (if not enlightening) presidential primary season. I expressed my opinion that Vice President Biden would not enter the race, only to have him prove me right in his Rose Garden remarks a few hours later. If only all my predictions came true (and as quickly). We at Brookings benefited greatly from the insights of both of these October delegations, and we look forward to welcoming many more from every part of the Czech political spectrum in the months ahead.

Prouza, Eisen, Povejsil, October 21, 2015


Image Source: © Gary Hershorn / Reuters
       





ReFormers Caucus kicks off its fight for meaningful campaign finance reform


I was honored today to speak at the kickoff meeting of the new ReFormers Caucus. This group of over 100 former U.S. senators, representatives, and governors of both parties has come together to fight for meaningful campaign finance reform. In the bipartisan spirit of the caucus, I shared speaking duties with Professor Richard Painter, who was the Bush administration ethics czar and my predecessor before I had a similar role in the Obama White House.

As I told the distinguished audience of ReFormers (get the pun?) gathered over lunch on Capitol Hill, I wish the caucus had existed when, in my Obama administration role, I was working for the passage of the DISCLOSE Act. That bill would have brought true transparency to the post-Citizens United campaign finance system, yet it failed by just one vote in Congress. But it is not too late for Americans, working together, to secure enhanced transparency and other campaign finance changes that are desperately needed. Momentum is building, with increasing levels of public outrage, as reflected in the state and local referenda passed in Maine, Seattle, and San Francisco just this week, and much more to come at the federal, state, and local levels.


       





More solutions from the campaign finance summit


We have received many emails and calls in response to our blog last week about our campaign finance reform “Solutions Summit," so we thought we would share some pictures and quotes from the event. Also, Issue One’s Nick Penniman and I just co-authored an op-ed highlighting the themes of the event, which you can find here.

Ann Ravel, Commissioner and outgoing Chairwoman of the Federal Election Commission, kicked us off as our luncheon speaker. She noted that “campaign finance issues [will] only be addressed when there is a scandal. The truth is that campaign finance today is a scandal.”

    

(L-R, Ann Ravel, Trevor Potter, Peter Schweizer, Timothy Roemer)

Commenting on Ann’s remarks from a conservative perspective, Peter Schweizer, the President of the Government Accountability Institute, noted that “increasingly today the problem is more one of extortion: the challenge [is] not so much from businesses that are trying to influence politicians, although that certainly happens, but that businesses feel, and are, targeted by politicians in the search for cash.” That’s Trevor Potter, who introduced Ann, to Peter’s left.

Kicking off the first panel, a deep dive into the elements of the campaign finance crisis, was Tim Roemer, former Ambassador to India (2009-2011), Member of the U.S. House of Representatives (D-IN, 1991-2003), Member of the 9/11 Commission, and Senior Strategic Advisor to Issue One. He explained, “This is not a red state problem. It’s not a blue state problem. Across the heartland, across America, the Left, the Right, the Democrats, the Republicans, Independents, we all need to work together to fix this.”

(L-R, Fred Wertheimer, John Bonifaz, Dan Wolf, Roger Katz, Allen Loughry, Cheri Beasley, Norman Eisen)

Our second panel addressed solutions at the federal and state level.  Here, Fred Wertheimer, the founder and President of Democracy 21 is saying that, “We are going to have major scandals again and we are going to have opportunities for major reforms. With this corrupt campaign finance system it is only a matter of time before the scandals really break out. The American people are clearly ready for a change. The largest national reform movement in decades now exists and it’s growing rapidly.”

Our third and final panel explained why the time for reform is now. John Sarbanes, Member of the U.S. House of Representatives (D-MD), argued that fixes are within political reach: “If we can build on the way people feel about [what] they’re passionate on and lead them that way to this need for reform, then we’re going to build the kind of broad, deep coalition that will achieve success ultimately.”

 

(L-R in each photo, John Sarbanes, Claudine Schneider, Zephyr Teachout)

Reinforcing John’s remarks, Claudine Schneider, Member of the U.S. House of Representatives (R-RI, 1981-1991), pointed out that “we need to keep pounding the media with letters to the editor, with editorial press conferences, with [a] broad spectrum of media strategies where we can get the attention of the masses. Because once the masses rise up, I believe that’s when we’re really going to get the change, from the bottom up and the top down.”

Grace Abiera contributed to this post.


       





Five reasons for (cautious) optimism about the EU’s future


The European Union (EU) is confronting a series of potentially existential threats, including the refugee crisis, ISIS terror, Russian adventurism, and Brexit (the potential exit of the U.K. from the EU).  I hosted Czech Prime Minister Bohuslav Sobotka at Brookings to get his fundamentally (but carefully) optimistic take on how he and his fellow EU leaders can meet those challenges. Here are five reasons for optimism that emerged from our conversation: 

  1. Take the Fight to Daesh.  The PM made clear Europe’s determination to take on the terror and refugee issues at their source in Iraq, Syria, and Libya.  Just this week, the Czech Republic upped its commitment to the international coalition, announcing that it will send a team to train Iraqis using Czech-made L-159 fighter jets (also sold to Iraq by Prague).  With transatlantic leadership, these efforts are starting to bear fruit in the decay of ISIS.
  2. Never Let a Good Crisis Go to Waste. As part of addressing today’s refugee crisis, Europe is exploring multilateral efforts to construct a common European border service, integrate refugee populations, and promote internal security.  The process is painful, but filling these gaps will make the European Union stronger.
  3. Stand Strong With Ukraine.  Some predicted that European unity against Putin’s expansionism would not hold.  Instead, the EU and the United States have maintained their resolve in enacting sanctions.  That has strengthened the EU, but as the PM pointed out, now Ukraine and its supporters must make sure that state moves towards good governance and functionality. 
  4. Taking the Exit Out of Brexit.  The PM predicted that the U.K. would not exit the EU.  When I pressed him on why, he acknowledged that there were elements of wishing and hoping in that forecast, and that the vote comes at a tough moment.  But I share the PM’s hopes—the U.K. is not one to leave friends when times get tough.
  5. Never Forget to Remember.  The PM and I spent a lot of time discussing the ups and downs of Central Europe’s experiment with democracy over the past century.  He and his Czech colleagues—of all mainstream political parties—are acutely aware of that history, and that too gives me hope that it will not be repeated.

Immense challenges can destabilize and divide—but they also present opportunities for new collaboration and cohesion. If addressed in partnership, Europe’s current trials can ultimately strengthen the ties that bind the EU together.  

Watch the full discussion here.

Andrew Kenealy contributed to this post. 


Image Source: Paul Morigi
       





Three keys to reforming government: Lessons from repairing the VA


On June 20, I moderated a conversation on the future of the Department of Veterans Affairs with Secretary Robert McDonald. When he took office almost two years ago, Secretary McDonald inherited an organization in crisis: too many veterans faced shockingly long wait-times before they received care, VA officials had allegedly falsified records, and other allegations of mismanagement abounded.

Photo: Paul Morigi

Since he was sworn into office, Secretary McDonald has led the VA through a period of ambitious reform, anchored by the MyVA program. He and his team have embraced three core strategies that are securing meaningful change. These strategies hold important insights for all government leaders, and for private sector ones as well.

1. Set bold goals

Secretary McDonald’s vision is for the VA to become the number one customer-service agency in the federal government. But he and his team know that words alone won’t make this happen. They developed twelve breakthrough priorities for 2016 that will directly improve service to veterans. These actionable short-term objectives support the VA’s longer term aim to deliver an exceptional experience for our veterans. By aiming high, and also drafting a concrete roadmap, the VA has put itself on a path to success.

2. Hybridize the best of public and private sectors

To accomplish their ambitious goal, VA leadership is applying the best practices of customer-service businesses around the nation. The Secretary and his colleagues are leveraging the goodwill, resources, and expertise of both the private and public sector. To do that, the VA has brought together diverse groups of business leaders, medical professionals, government executives, and veteran advocates under their umbrella MyVA Advisory Committee. Following the examples set by private sector leaders in service provision and innovation, the VA is developing user-friendly mobile apps for veterans, modernizing its website, and seeking to make hiring practices faster, more competitive, and more efficient. And so that no good idea is left unheard, the VA has created a "shark tank” to capture and enact suggestions and recommendations for improvement from the folks who best understand daily VA operations—VA employees themselves.

3. Data, data, data

The benefits of data-driven decision making in government are well known. As led by Secretary McDonald, the VA has continued to embrace the use of data to inform its policies and improve its performance. Already a leader in the collection and publication of data, the VA has recently taken even greater strides in sharing information between its healthcare delivery agencies. In addition to collecting administrative and health-outcomes information, the VA is gathering data from veterans about what they think. Automated kiosks allow veterans to check in for appointments, and to record their level of satisfaction with the services provided.

The results that the Secretary and his team have achieved speak for themselves:

  • 5 million more appointments completed last fiscal year over the previous fiscal year
  • 7 million additional hours of care for veterans in the last two years (based on an increase in the clinical workload of 11 percent over the last two years)
  • 97 percent of appointments completed within 30 days of the veteran’s preferred date; 86 percent within 7 days; 22 percent the same day
  • Average wait times of 5 days for primary care, 6 days for specialty care, and 2 days for mental health care
  • 90 percent of veterans say they are satisfied or completely satisfied with when they got their appointment (less than 3 percent said they were dissatisfied or completely dissatisfied).
  • The backlog for disability claims—once over 600,000 claims that were more than 125 days old—is down almost 90 percent.

Thanks to Secretary McDonald’s continued commitment to modernization, the VA has made significant progress. Problems, of course, remain at the VA, and the Secretary has more work to do to ensure America honors the debt it owes its veterans, but the past two years of reform have moved the Department in the right direction. His strategies are instructive for managers of change everywhere.

Fred Dews and Andrew Kenealy contributed to this post.


Image Source: © Jim Bourg / Reuters
       





One Step Forward, Many Steps Back for Refugees

      
 
 





Human rights, climate change and cross-border displacement

      
 
 





Principles for Transparency and Public Participation in Redistricting

Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good government groups was convened in order to articulate principles for transparent redistricting and to identify barriers to the public and communities who wish to create redistricting…

      
 
 





Terrorists and Detainees: Do We Need a New National Security Court?

In the wake of the 9/11 attacks and the capture of hundreds of suspected al Qaeda and Taliban fighters, we have been engaged in a national debate as to the proper standards and procedures for detaining “enemy combatants” and prosecuting them for war crimes. Dissatisfaction with the procedures established at Guantanamo for detention decisions and…

       





Targeted Killing in U.S. Counterterrorism Strategy and Law

The following is part of the Series on Counterterrorism and American Statutory Law, a joint project of the Brookings Institution, the Georgetown University Law Center, and the Hoover Institution. It is a slight exaggeration to say that Barack Obama is the first president in American history to have run in part on a political…

       





What do Midwest working-class voters want and need?

If Donald Trump ends up facing off against Joe Biden in 2020, it will be portrayed as a fight for the hearts and souls of white working-class voters in Pennsylvania, Wisconsin, and my home state of Michigan. But what do these workers want and need? The President and his allies on the right offer a…

       





How Promise programs can help former industrial communities

The nation is seeing accelerating gaps in economic opportunity and prosperity between more educated, tech-savvy, knowledge workers congregating in the nation’s “superstar” cities (and a few university-town hothouses) and residents of older industrial cities and the small towns of “flyover country.” These growing divides are shaping public discourse, as policymakers and thought leaders advance recipes…

       





Most business incentives don’t work. Here’s how to fix them.

In 2017, the state of Wisconsin agreed to provide $4 billion in state and local tax incentives to the electronics manufacturing giant Foxconn. In return, the Taiwan-based company promised to build a new manufacturing plant in the state for flat-screen television displays and the subsequent creation of 13,000 new jobs. It didn’t happen. Those 13,000…

       





American workers’ safety net is broken. The COVID-19 crisis is a chance to fix it.

The COVID-19 pandemic is forcing some major adjustments to many aspects of our daily lives that will likely remain long after the crisis recedes: virtual learning, telework, and fewer hugs and handshakes, just to name a few. But in addition, let’s hope the crisis also drives a permanent overhaul of the nation’s woefully inadequate worker…

       





COP 21 at Paris: The issues, the actors, and the road ahead on climate change

At the end of the month, governments from nearly 200 nations will convene in Paris, France for the 21st annual U.N. climate conference (COP21). Expectations are high for COP21 as leaders aim to achieve a legally binding and universal agreement on limiting global temperature increases for the first time in over 20 years. Ahead of this…

       





When the champagne is finished: Why the post-Paris parade of climate euphoria is largely premature

The new international climate change agreement has received largely positive reviews despite the fact that many years of hard work will be required to actually turn “Paris” into a success. As with all international agreements, the Paris agreement too will have to be tested and proven over time. The Eiffel Tower is engulfed in fog…

       





6 years from the BP Deepwater Horizon oil spill: What we’ve learned, and what we shouldn’t misunderstand

Six years ago today, the BP Deepwater Horizon oil spill occurred in the U.S. Gulf of Mexico with devastating effects on the local environment and on public perception of offshore oil and gas drilling. The blowout sent toxic fluids and gas shooting up the well, leading to an explosion on board the rig that killed…

       





High Achievers, Tracking, and the Common Core


A curriculum controversy is roiling schools in the San Francisco Bay Area.  In the past few months, parents in the San Mateo-Foster City School District, located just south of San Francisco International Airport, voiced concerns over changes to the middle school math program. The changes were brought about by the Common Core State Standards (CCSS).  Under previous policies, most eighth graders in the district took algebra I.  Some very sharp math students, who had already completed algebra I in seventh grade, took geometry in eighth grade. The new CCSS-aligned math program will reduce eighth grade enrollments in algebra I and eliminate geometry altogether as a middle school course. 

A little background information will clarify the controversy.  Eighth grade mathematics may be the single grade-subject combination most profoundly affected by the CCSS.  In California, the push for most students to complete algebra I by the end of eighth grade has been a centerpiece of state policy, as it has been in several states influenced by the “Algebra for All” movement that began in the 1990s.  Nationwide, in 1990, about 16 percent of all eighth graders reported that they were taking an algebra or geometry course.  In 2013, the number was three times larger, and nearly half of all eighth graders (48 percent) were taking algebra or geometry.[i]  When that percentage goes down, as it is sure to under the CCSS, what happens to high achieving math students?

The parents who are expressing the most concern have kids who excel at math.  One parent in San Mateo-Foster City told The San Mateo Daily Journal, “This is really holding the advanced kids back.”[ii] The CCSS math standards recommend a single math course for seventh grade, integrating several math topics, followed by a similarly integrated math course in eighth grade.  Algebra I won’t be offered until ninth grade.  The San Mateo-Foster City School District decided to adopt a “three years into two” accelerated option.  This strategy is suggested on the Common Core website as an option that districts may consider for advanced students.  It combines the curriculum from grades seven through nine (including algebra I) into a two year offering that students can take in seventh and eighth grades.[iii]  The district will also provide—at one school site—a sequence beginning in sixth grade that compacts four years of math into three.  Both accelerated options culminate in the completion of algebra I in eighth grade.

The San Mateo-Foster City School District is home to many well-educated, high-powered professionals who work in Silicon Valley.  They are unrelentingly liberal in their politics.  Equity is a value they hold dear.[iv]  They also know that completing at least one high school math course in middle school is essential for students who wish to take AP Calculus in their senior year of high school.  As CCSS is implemented across the nation, administrators in districts with demographic profiles similar to San Mateo-Foster City will face parents of mathematically precocious kids asking whether the “common” in Common Core mandates that all students take the same math course.  Many of those districts will respond to their constituents and provide accelerated pathways (“pathway” is CCSS jargon for course sequence). 

But other districts will not.  Data show that urban schools, schools with large numbers of black and Hispanic students, and schools located in impoverished neighborhoods are reluctant to differentiate curriculum.  It is unlikely that gifted math students in those districts will be offered an accelerated option under CCSS.  The reason why can be summed up in one word: tracking.

Tracking in eighth grade math means providing different courses to students based on their prior math achievement.  The term “tracking” has been stigmatized, coming under fire for being inequitable.  Historically, where tracking existed, black, Hispanic, and disadvantaged students were often underrepresented in high-level math classes; white, Asian, and middle-class students were often over-represented.  An anti-tracking movement gained a full head of steam in the 1980s.  Tracking reformers knew that persuading high schools to de-track was hopeless.  Consequently, tracking’s critics focused reform efforts on middle schools, urging that they group students heterogeneously with all students studying a common curriculum.  That approach took hold in urban districts, but not in the suburbs.

Now the Common Core and de-tracking are linked.  Providing an accelerated math track for high achievers has become a flashpoint throughout the San Francisco Bay Area.  An October 2014 article in The San Jose Mercury News named Palo Alto, Saratoga, Cupertino, Pleasanton, and Los Gatos as districts that have announced, in response to parent pressure, that they are maintaining an accelerated math track in middle schools.  These are high-achieving, suburban districts.  Los Gatos parents took to the internet with a petition drive when a rumor spread that advanced courses would end.  EdSource reports that 900 parents signed a petition opposing the move and that board meetings on the issue were packed with opponents.  The accelerated track was kept.  Piedmont established a single track for everyone, but allowed parents to apply for an accelerated option.  About twenty-five percent did so.  The Mercury News story underscores the demographic pattern that is unfolding and asks whether CCSS “could cement a two-tier system, with accelerated math being the norm in wealthy areas and the exception elsewhere.”

What is CCSS’s real role here?  Does the Common Core take an explicit stand on tracking?  Not really.  But de-tracking advocates can interpret the “common” in Common Core as license to eliminate accelerated tracks for high achievers.  As a noted CCSS supporter (and tracking critic), William H. Schmidt, has stated, “By insisting on common content for all students at each grade level and in every community, the Common Core mathematics standards are in direct conflict with the concept of tracking.”[v]  Thus, tracking joins other controversial curricular ideas—e.g., integrated math courses instead of courses organized by content domains such as algebra and geometry; an emphasis on “deep,” conceptual mathematics over learning procedures and basic skills—as “dog whistles” embedded in the Common Core.  Controversial positions aren’t explicitly stated, but they can be heard by those who want to hear them.    

CCSS doesn’t have to take an outright stand on these debates in order to have an effect on policy.  For the practical questions that local grouping policies resolve—who takes what courses and when they take them—CCSS wipes the slate clean.  There are plenty of people ready to write on that blank slate, particularly administrators frustrated by unsuccessful efforts to de-track in the past.

Suburban parents are mobilized in defense of accelerated options for advantaged students.  What about kids who are outstanding math students but also happen to be poor, black, or Hispanic?  What happens to them, especially if they attend schools in which the top institutional concern is meeting the needs of kids functioning several years below grade level?  I presented a paper on this question at a December 2014 conference held by the Fordham Institute in Washington, DC.  I proposed a pilot program of “tracking for equity.”  By that term, I mean offering black, Hispanic, and poor high achievers the same opportunity that the suburban districts in the Bay Area are offering.  High achieving middle school students in poor neighborhoods would be able to take three years of math in two years and proceed on a path toward AP Calculus as high school seniors.

It is true that tracking must be done carefully.  Tracking can be conducted unfairly and has been used unjustly in the past.  One of the worst consequences of earlier forms of tracking was that low-skilled students were tracked into dead end courses that did nothing to help them academically.  These low-skilled students were disproportionately from disadvantaged communities or communities of color.  That’s not a danger in the proposal I am making.  The default curriculum, the one every student would take if not taking the advanced track, would be the Common Core.  If that’s a dead end for low achievers, Common Core supporters need to start being more honest in how they are selling the CCSS.  Moreover, to ensure that the policy gets to the students for whom it is intended, I have proposed running the pilot program in schools predominantly populated by poor, black, or Hispanic students.  The pilot won’t promote segregation within schools because the sad reality is that participating schools are already segregated.

Since I presented the paper, I have privately received negative feedback from both Algebra for All advocates and Common Core supporters.  That’s disappointing.  Because of their animus toward tracking, some critics seem to support a severe policy swing from Algebra for All, which was pursued for equity, to Algebra for None, which will be pursued for equity.  It’s as if either everyone or no one should be allowed to take algebra in eighth grade.  The argument is that allowing only some eighth graders to enroll in algebra is elitist, even if the students in question are poor students of color who are prepared for the course and likely to benefit from taking it.

The controversy raises crucial questions about the Common Core.  What’s common in the common core?  Is it the curriculum?  And does that mean the same curriculum for all?  Will CCSS serve as a curricular floor, ensuring all students are exposed to a common body of knowledge and skills?  Or will it serve as a ceiling, limiting the progress of bright students so that their achievement looks more like that of their peers?  These questions will be answered differently in different communities, and as they are, the inequities that Common Core supporters think they’re addressing may surface again in a profound form.   



[i] Loveless, T. (2008). The 2008 Brown Center Report on American Education. Retrieved from http://www.brookings.edu/research/reports/2009/02/25-education-loveless. For San Mateo-Foster City’s sequence of math courses, see: page 10 of http://smfc-ca.schoolloop.com/file/1383373423032/1229222942231/1242346905166154769.pdf 

[ii] Swartz, A. (2014, November 22). “Parents worry over losing advanced math classes: San Mateo-Foster City Elementary School District revamps offerings because of Common Core.” San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-11-22/parents-worry-over-losing-advanced-math-classes-san-mateo-foster-city-elementary-school-district-revamps-offerings-because-of-common-core/1776425133822.html

[iii] Swartz, A. (2014, December 26). “Changing Classes Concern for parents, teachers: Administrators say Common Core Standards Reason for Modifications.” San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-12-26/changing-classes-concern-for-parents-teachers-administrators-say-common-core-standards-reason-for-modifications/1776425135624.html

[iv] In the 2014 election, Jerry Brown (D) took 75% of Foster City’s votes for governor.  In the 2012 presidential election, Barack Obama received 71% of the vote. http://www.city-data.com/city/Foster-City-California.html

[v] Schmidt, W.H. and Burroughs, N.A. (2012) “How the Common Core Boosts Quality and Equality.” Educational Leadership, December 2012/January 2013. Vol. 70, No. 4, pp. 54-58.


     
 
 





2015 Brown Center Report on American Education: How Well Are American Students Learning?


Editor's Note: The introduction to the 2015 Brown Center Report on American Education appears below. Use the Table of Contents to navigate through the report online, or download a PDF of the full report.

TABLE OF CONTENTS

Part I: Girls, Boys, and Reading

Part II: Measuring Effects of the Common Core

Part III: Student Engagement


INTRODUCTION

The 2015 Brown Center Report (BCR) represents the 14th edition of the series since the first issue was published in 2000.  It includes three studies.  Like all previous BCRs, the studies explore independent topics but share two characteristics: they are empirical and based on the best evidence available.  The studies in this edition are on the gender gap in reading, the impact of the Common Core State Standards in English Language Arts on reading achievement, and student engagement.

Part one examines the gender gap in reading.  Girls outscore boys on practically every reading test given to a large population.  And they have for a long time.  A 1942 Iowa study found girls performing better than boys on tests of reading comprehension, vocabulary, and basic language skills.  Girls have outscored boys on every reading test ever given by the National Assessment of Educational Progress (NAEP)—the first long-term trend test was administered in 1971—at ages 9, 13, and 17.  The gap is not confined to the U.S.  Reading tests administered as part of the Progress in International Reading Literacy Study (PIRLS) and the Program for International Student Assessment (PISA) reveal that the gender gap is a worldwide phenomenon.  In more than sixty countries participating in the two assessments, girls are better readers than boys.

Perhaps the most surprising finding is that Finland, celebrated for its extraordinary performance on PISA for over a decade, can take pride in its high standing on the PISA reading test solely because of the performance of that nation’s young women.  With its 62 point gap, Finland has the largest gender gap of any PISA participant, with girls scoring 556 and boys scoring 494 points (the OECD average is 496, with a standard deviation of 94).   If Finland were only a nation of young men, its PISA ranking would be mediocre.

Part two is about reading achievement, too. More specifically, it’s about reading and the English Language Arts standards of the Common Core (CCSS-ELA).  It’s also about an important decision that policy analysts must make when evaluating public policies—the determination of when a policy begins. How can CCSS be properly evaluated? 

Two different indexes of CCSS-ELA implementation are presented, one based on 2011 data and the other on data collected in 2013.  In both years, state education officials were surveyed about their Common Core implementation efforts.  Because forty-six states originally signed on to the CCSS-ELA—and at least forty remain on track for full implementation by 2016—little variability exists among the states in terms of standards policy.  Of course, the four states that never adopted CCSS-ELA can serve as a small control group.  But variation is also found in how the states are implementing CCSS.  Some states are pursuing an array of activities and aiming for full implementation earlier rather than later.  Others have a narrow, targeted implementation strategy and are proceeding more slowly.

The analysis investigates whether CCSS-ELA implementation is related to 2009-2013 gains on the fourth grade NAEP reading test.  The analysis cannot verify causal relationships between the two variables, only correlations.  States that have aggressively implemented CCSS-ELA (referred to as “strong” implementers in the study) evidence a one to one and one-half point larger gain on the NAEP scale compared to non-adopters of the standards.  This association is similar in magnitude to an advantage found in a study of eighth grade math achievement in last year’s BCR.  Although positive, these effects are quite small.  When the 2015 NAEP results are released this winter, it will be important for the fate of the Common Core project to see if strong implementers of the CCSS-ELA can maintain their momentum.

Part three is on student engagement.  PISA tests fifteen-year-olds on three subjects—reading, math, and science—every three years.  It also collects a wealth of background information from students, including their attitudes toward school and learning.  When the 2012 PISA results were released, PISA analysts published an accompanying volume, Ready to Learn: Students’ Engagement, Drive, and Self-Beliefs, exploring topics related to student engagement.

Part three provides secondary analysis of several dimensions of engagement found in the PISA report.  Intrinsic motivation, the internal rewards that encourage students to learn, is an important component of student engagement.  National scores on PISA’s index of intrinsic motivation to learn mathematics are compared to national PISA math scores.  Surprisingly, the relationship is negative.  Countries with highly motivated kids tend to score lower on the math test; conversely, higher-scoring nations tend to have less-motivated kids. 

The same is true for responses to the statements, “I do mathematics because I enjoy it,” and “I look forward to my mathematics lessons.”  Countries with students who say that they enjoy math or look forward to their math lessons tend to score lower on the PISA math test compared to countries where students respond negatively to the statements.  These counterintuitive findings may be influenced by how terms such as “enjoy” and “looking forward” are interpreted in different cultures.  Within-country analyses address that problem.  The correlation coefficients for within-country, student-level associations of achievement and other components of engagement run in the anticipated direction—they are positive.  But they are also modest in size, with correlation coefficients of 0.20 or less.

Policymakers are interested in questions requiring analysis of aggregated data—at the national level, that means between-country data.  When countries increase their students’ intrinsic motivation to learn math, is there a concomitant increase in PISA math scores?  Data from 2003 to 2012 are examined.  Seventeen countries managed to increase student motivation, but their PISA math scores fell an average of 3.7 scale score points.  Fourteen countries showed no change on the index of intrinsic motivation—and their PISA scores also evidenced little change.  Eight countries witnessed a decline in intrinsic motivation.  Inexplicably, their PISA math scores increased by an average of 10.3 scale score points.  Motivation down, achievement up.

Correlation is not causation.  Moreover, the absence of a positive correlation—or in this case, the presence of a negative correlation—is not refutation of a possible positive relationship.  The lesson here is not that policymakers should adopt the most effective way of stamping out student motivation.  The lesson is that the level of analysis matters when analyzing achievement data.  Policy reports must be read warily—especially those freely offering policy recommendations.  Beware of analyses that exclusively rely on within- or between-country test data without making any attempt to reconcile discrepancies at other levels of analysis.  Those analysts could be cherry-picking the data.  Also, consumers of education research should grant more credence to approaches modeling change over time (as in difference-in-differences models) than to cross-sectional analyses that only explore statistical relationships at a single point in time.
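To make the level-of-analysis point concrete, here is a toy simulation in Python (with invented numbers, not actual PISA data). It constructs three hypothetical countries in which the student-level correlation between motivation and achievement is positive inside every country, while the correlation computed from the three country means is negative, reproducing the reversal described above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented country profiles: higher-scoring countries report lower
    # average motivation, mimicking the between-country PISA pattern.
    countries = [(0.8, 450), (0.0, 500), (-0.8, 550)]  # (mean motivation, mean score)

    means = []
    for mot_mean, score_mean in countries:
        motivation = rng.normal(mot_mean, 1.0, size=2000)
        # Within each country, motivation helps a little: positive slope plus noise.
        score = score_mean + 15 * (motivation - mot_mean) + rng.normal(0, 60, size=2000)
        means.append((motivation.mean(), score.mean()))
        print(f"within-country r = {np.corrcoef(motivation, score)[0, 1]:+.2f}")

    mot_means, score_means = zip(*means)
    print(f"between-country r = {np.corrcoef(mot_means, score_means)[0, 1]:+.2f}")

Each within-country correlation comes out modestly positive (about +0.24, in line with the coefficients of 0.20 or less noted above), while the between-country correlation of the aggregated means is strongly negative. Both are genuine features of the same data; which one matters depends on the question being asked.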




Image Source: Elizabeth Sablich
     
 
 





Measuring effects of the Common Core


Part II of the 2015 Brown Center Report on American Education

Over the next several years, policy analysts will evaluate the impact of the Common Core State Standards (CCSS) on U.S. education.  The task promises to be challenging.  The question most analysts will focus on is whether the CCSS is good or bad policy.  This section of the Brown Center Report (BCR) tackles a set of seemingly innocuous questions compared to the hot-button question of whether Common Core is wise or foolish.  The questions all have to do with when Common Core actually started, or more precisely, when the Common Core started having an effect on student learning.  And if it hasn’t yet had an effect, how will we know that CCSS has started to influence student achievement? 

The analysis below probes this issue empirically, hopefully persuading readers that deciding when a policy begins is elemental to evaluating its effects.  The question of a policy’s starting point is not always easy to answer.  Yet the answer has consequences.  You can’t figure out whether a policy worked or not unless you know when it began.[i] 

The analysis uses surveys of state implementation to model different CCSS starting points for states and produces a second early report card on how CCSS is doing.  The first report card, focusing on math, was presented in last year’s BCR.  The current study updates state implementation ratings that were presented in that report and extends the analysis to achievement in reading.  The goal is not only to estimate CCSS’s early impact, but also to lay out a fair approach for establishing when the Common Core’s impact began—and to do it now before data are generated that either critics or supporters can use to bolster their arguments.  The experience of No Child Left Behind (NCLB) illustrates this necessity.

Background

After the 2008 National Assessment of Educational Progress (NAEP) scores were released, former Secretary of Education Margaret Spellings claimed that the new scores showed “we are on the right track.”[ii] She pointed out that NAEP gains in the previous decade, 1999-2009, were much larger than in prior decades.  Mark Schneider of the American Institutes for Research (and a former Commissioner of the National Center for Education Statistics [NCES]) reached a different conclusion. He compared NAEP gains from 1996-2003 to 2003-2009 and declared NCLB’s impact disappointing.  “The pre-NCLB gains were greater than the post-NCLB gains.”[iii]  It is important to highlight that Schneider used the 2003 NAEP scores as the starting point for assessing NCLB.  A report from FairTest on the tenth anniversary of NCLB used the same demarcation for pre- and post-NCLB time frames.[iv]  FairTest is an advocacy group critical of high stakes testing—and harshly critical of NCLB—but if the 2003 starting point for NAEP is accepted, its conclusion is indisputable: “NAEP score improvement slowed or stopped in both reading and math after NCLB was implemented.”

Choosing 2003 as NCLB’s starting date is intuitively appealing.  The law was introduced, debated, and passed by Congress in 2001.  President Bush signed NCLB into law on January 8, 2002.  It takes time to implement any law.  The 2003 NAEP is arguably the first chance that the assessment had to register NCLB’s effects. 

Selecting 2003 is consequential, however.  Some of the largest gains in NAEP’s history were registered between 2000 and 2003.  Once 2003 is established as a starting point (or baseline), pre-2003 gains become “pre-NCLB.”  But what if the 2003 NAEP scores were influenced by NCLB? Experiments evaluating the effects of new drugs collect baseline data from subjects before treatment, not after the treatment has begun.  Similarly, evaluating the effects of public policies requires that baseline data not be influenced by the policies under evaluation.

Avoiding such problems is particularly difficult when state or local policies are adopted nationally.  The federal effort to establish a speed limit of 55 miles per hour in the 1970s is a good example.  Several states already had speed limits of 55 mph or lower prior to the federal law’s enactment.  Moreover, a few states lowered speed limits in anticipation of the federal limit while the bill was debated in Congress.  On the day President Nixon signed the bill into law—January 2, 1974—the Associated Press reported that only 29 states would be required to lower speed limits.  Evaluating the effects of the 1974 law with national data but neglecting to adjust for what states were already doing would obviously yield tainted baseline data.

There are comparable reasons for questioning 2003 as a good baseline for evaluating NCLB’s effects.  The key components of NCLB’s accountability provisions—testing students, publicizing the results, and holding schools accountable for results—were already in place in nearly half the states.  In some states they had been in place for several years.  The 1999 iteration of Quality Counts, Education Week’s annual report on state-level efforts to improve public education, entitled Rewarding Results, Punishing Failure, was devoted to state accountability systems and the assessments underpinning them. Testing and accountability are especially important because they have drawn fire from critics of NCLB, a law that wasn’t passed until years later.

The Congressional debate of NCLB legislation took all of 2001, allowing states to pass anticipatory policies.  Derek Neal and Diane Whitmore Schanzenbach reported that “with the passage of NCLB lurking on the horizon,” Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes.[v] In the summer and fall of 2002, with NCLB now the law of the land, state after state released lists of schools falling short of NCLB’s requirements.  Then the 2002-2003 school year began, during which the 2003 NAEP was administered.  Using 2003 as a NAEP baseline assumes that none of these activities—previous accountability systems, public lists of schools in need of improvement, anticipatory policy shifts—influenced achievement.  That is unlikely.[vi]
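A quick back-of-the-envelope calculation shows how much rides on the choice of baseline. The scores below are invented for illustration (they are not actual NAEP results); they simply encode the pattern described above, in which large gains occur while a law is being debated, anticipated, and first implemented.

    # Hypothetical scale scores for a NAEP-like assessment (invented numbers).
    scores = {1996: 222, 2000: 224, 2003: 234, 2009: 239}

    def annual_gain(start, end):
        """Average points gained per year between two assessment years."""
        return (scores[end] - scores[start]) / (end - start)

    # With 2003 as the baseline, the 2000-2003 surge counts as "pre-policy."
    print(f"2003 baseline: pre {annual_gain(1996, 2003):.2f}/yr, post {annual_gain(2003, 2009):.2f}/yr")

    # With 2000 as the baseline, the same surge counts as "post-policy."
    print(f"2000 baseline: pre {annual_gain(1996, 2000):.2f}/yr, post {annual_gain(2000, 2009):.2f}/yr")

The first line prints pre-policy gains of 1.71 points per year against post-policy gains of 0.83; the second prints 0.50 against 1.67. Same data, opposite verdicts, and the only thing that changed is the assumed starting point.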

The Analysis

Unlike NCLB, there was no “pre-CCSS” state version of Common Core.  States vary in how quickly and aggressively they have implemented CCSS.  For the BCR analyses, two indexes were constructed to model CCSS implementation.  They are based on surveys of state education agencies and named for the two years that the surveys were conducted.  The 2011 survey reported the number of programs (e.g., professional development, new materials) on which states reported spending federal funds to implement CCSS.  Strong implementers spent money on more activities.  The 2011 index was used to investigate eighth grade math achievement in the 2014 BCR.  A new implementation index was created for this year’s study of reading achievement.  The 2013 index is based on a survey asking states when they planned to complete full implementation of CCSS in classrooms.  Strong states aimed for full implementation by 2012-2013 or earlier.      

Fourth grade NAEP reading scores serve as the achievement measure.  Why fourth grade and not eighth?  Reading instruction is a key activity of elementary classrooms but by eighth grade has all but disappeared.  What remains of “reading” as an independent subject, which has typically morphed into the study of literature, is subsumed under the English-Language Arts curriculum, a catchall term that also includes writing, vocabulary, listening, and public speaking.  Most students in fourth grade are in self-contained classes; they receive instruction in all subjects from one teacher.  The impact of CCSS on reading instruction—the recommendation that non-fiction take a larger role in reading materials is a good example—will be concentrated in the activities of a single teacher in elementary schools. The burden for meeting CCSS’s press for non-fiction, on the other hand, is expected to be shared by all middle and high school teachers.[vii] 

Results

Table 2-1 displays NAEP gains using the 2011 implementation index.  The four year period between 2009 and 2013 is broken down into two parts: 2009-2011 and 2011-2013.  Nineteen states are categorized as “strong” implementers of CCSS on the 2011 index, and from 2009-2013, they outscored the four states that did not adopt CCSS by a little more than one scale score point (0.87 vs. -0.24 for a 1.11 difference).  The non-adopters are the logical control group for CCSS, but with only four states in that category—Alaska, Nebraska, Texas, and Virginia—it is sensitive to big changes in one or two states.  Alaska and Texas both experienced a decline in fourth grade reading scores from 2009-2013.

The 1.11 point advantage in reading gains for strong CCSS implementers is similar to the 1.27 point advantage reported last year for eighth grade math.  Both are small.  The reading difference in favor of CCSS is equal to approximately 0.03 standard deviations of the 2009 baseline reading score.  Also note that the differences were greater in 2009-2011 than in 2011-2013 and that the “medium” implementers performed as well as or better than the strong implementers over the entire four year period (gain of 0.99).

Table 2-2 displays calculations using the 2013 implementation index.  Twelve states are rated as strong CCSS implementers, seven fewer than on the 2011 index.[viii]  Data for the non-adopters are the same as in the previous table.  In 2009-2013, the strong implementers gained 1.27 NAEP points compared to -0.24 among the non-adopters, a difference of 1.51 points.  The thirty-four states rated as medium implementers gained 0.82.  The strong implementers on this index are states that reported full implementation of CCSS-ELA by 2013.  Their larger gain in 2011-2013 (1.08 points) distinguishes them from the strong implementers in the previous table.  The overall advantage of 1.51 points over non-adopters represents about 0.04 standard deviations of the 2009 NAEP reading score, not a difference with real world significance.  Taken together, the 2011 and 2013 indexes estimate that NAEP reading gains from 2009-2013 were one to one and one-half scale score points larger in the strong CCSS implementation states compared to the states that did not adopt CCSS.
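The arithmetic behind these comparisons can be reproduced directly from the figures quoted above. Note one assumption: the report states only that the differences equal roughly 0.03 to 0.04 standard deviations, so the baseline SD used below (about 37 NAEP points) is backed out from that statement rather than taken from the report itself.

    # NAEP fourth grade reading gains, 2009-2013, as quoted in the text.
    gains = {
        "strong (2011 index)": 0.87,
        "strong (2013 index)": 1.27,
        "non-adopters": -0.24,  # Alaska, Nebraska, Texas, Virginia
    }

    # Assumed SD of the 2009 baseline score, inferred from the report's
    # statement that the advantages equal roughly 0.03-0.04 SD.
    BASELINE_SD = 37.0

    for label in ("strong (2011 index)", "strong (2013 index)"):
        advantage = gains[label] - gains["non-adopters"]
        print(f"{label}: {advantage:.2f} points = {advantage / BASELINE_SD:.3f} SD")

This prints advantages of 1.11 points (0.030 SD) and 1.51 points (0.041 SD), matching the one to one and one-half point differences discussed in the text and underscoring how small they are relative to the 0.20 SD often treated as the threshold for a noticeable effect.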

Common Core and Reading Content

As noted above, the 2013 implementation index is based on when states scheduled full implementation of CCSS in classrooms.  Other than reading achievement, does the index seem to reflect changes in any other classroom variable believed to be related to CCSS implementation?  If the answer is “yes,” that would bolster confidence that the index is measuring changes related to CCSS implementation. 

Let’s examine the types of literature that students encounter during instruction.  Perhaps the most controversial recommendation in the CCSS-ELA standards is the call for teachers to shift the content of reading materials away from stories and other fictional forms of literature in favor of more non-fiction.  NAEP asks fourth grade teachers the extent to which they teach fiction and non-fiction over the course of the school year (see Figure 2-1). 

Historically, fiction dominates fourth grade reading instruction.  It still does.  The percentage of teachers reporting that they teach fiction to a “large extent” exceeded the percentage answering “large extent” for non-fiction by 23 points in 2009 and 25 points in 2011.  In 2013, the difference narrowed to only 15 percentage points, primarily because of non-fiction’s increased use.  Fiction still dominated in 2013, but not by as much as in 2009.

The differences reported in Figure 2-1 are national indicators of fiction’s declining prominence in fourth grade reading instruction.  What about the states?  We know that they were involved to varying degrees with the implementation of Common Core from 2009-2013.  Is there evidence that fiction’s prominence was more likely to weaken in the states most aggressively pursuing CCSS implementation?

Table 2-3 displays the data tackling that question.  Fourth grade teachers in strong implementation states decisively favored the use of fiction over non-fiction in 2009 and 2011.  But the prominence of fiction in those states experienced a large decline in 2013 (-12.4 percentage points).  The decline for the entire four year period, 2009-2013, was larger in the strong implementation states (-10.8) than in the medium implementation (-7.5) or non-adoption states (-9.8).  

Conclusion

This section of the Brown Center Report analyzed NAEP data and two indexes of CCSS implementation, one based on data collected in 2011, the second from data collected in 2013.  NAEP scores for 2009-2013 were examined.  Fourth grade reading scores improved by 1.11 scale score points in states with strong implementation of CCSS compared to states that did not adopt CCSS.  A similar comparison in last year’s BCR found a 1.27 point difference on NAEP’s eighth grade math test, also in favor of states with strong implementation of CCSS.  These differences, although certainly encouraging to CCSS supporters, are quite small, amounting to (at most) 0.04 standard deviations (SD) on the NAEP scale.  A threshold of 0.20 SD—five times larger—is often invoked as the minimum size for a test score change to be regarded as noticeable.  The current study’s findings are also merely statistical associations and cannot be used to make causal claims.  Perhaps other factors are driving test score changes, unmeasured by NAEP or the other sources of data analyzed here. 

The analysis also found that fourth grade teachers in strong implementation states are more likely to be shifting reading instruction from fiction to non-fiction texts.  That trend should be monitored closely to see if it continues.  Other events to keep an eye on as the Common Core unfolds include the following:

1.  The 2015 NAEP scores, typically released in the late fall, will be important for the Common Core.  In most states, the first CCSS-aligned state tests will be given in the spring of 2015.  Based on the earlier experiences of Kentucky and New York, results are expected to be disappointing.  Common Core supporters can respond by explaining that assessments given for the first time often produce disappointing results.  They will also claim that the tests are more rigorous than previous state assessments.  But it will be difficult to explain stagnant or falling NAEP scores in an era when implementing CCSS commands so much attention.   

2.  Assessment will become an important implementation variable in 2015 and subsequent years.  For analysts, the strategy employed here, modeling different indicators based on information collected at different stages of implementation, should become even more useful.  Some states are planning to use Smarter Balanced Assessments, others are using the Partnership for Assessment of Readiness for College and Careers (PARCC), and still others are using their own homegrown tests.   To capture variation among the states on this important dimension of implementation, analysts will need to use indicators that are up-to-date.

3.  The politics of Common Core injects a dynamic element into implementation.  The status of implementation is constantly changing.  States may choose to suspend, to delay, or to abandon CCSS.  That will require analysts to regularly re-configure which states are considered “in” Common Core and which states are “out.”  To further complicate matters, states may be “in” some years and “out” in others.

A final word.  When the 2014 BCR was released, many CCSS supporters commented that it is too early to tell the effects of Common Core.  The point that states may need more time operating under CCSS to realize its full effects certainly has merit.  But that argument does not discount everything states have already done—including professional development, purchasing new textbooks and other instructional materials, designing new assessments, buying and installing computer systems, and conducting hearings and public outreach—as part of implementing the standards.  Some states are in their fifth year of implementation.  It could be that states need more time, but innovations can also produce their biggest “pop” earlier in implementation rather than later.  Kentucky was one of the earliest states to adopt and implement CCSS.  That state’s NAEP fourth grade reading score declined in both 2009-2011 and 2011-2013.  The optimism of CCSS supporters is understandable, but a one and a half point NAEP gain might be as good as it gets for CCSS.



[i] These ideas were first introduced in a 2013 Brown Center Chalkboard post I authored, entitled, “When Does a Policy Start?”

[ii] Maria Glod, “Since NCLB, Math and Reading Scores Rise for Ages 9 and 13,” Washington Post, April 29, 2009.

[iii] Mark Schneider, “NAEP Math Results Hold Bad News for NCLB,” AEIdeas (Washington, D.C.: American Enterprise Institute, 2009).

[iv] Lisa Guisbond with Monty Neill and Bob Schaeffer, NCLB’s Lost Decade for Educational Progress: What Can We Learn from this Policy Failure? (Jamaica Plain, MA: FairTest, 2012).

[v] Derek Neal and Diane Schanzenbach, “Left Behind by Design: Proficiency Counts and Test-Based Accountability,” NBER Working Paper No. W13293 (Cambridge: National Bureau of Economic Research, 2007), 13.

[vi] Careful analysts of NCLB have allowed different states to have different starting dates: see Thomas Dee and Brian A. Jacob, “Evaluating NCLB,” Education Next 10, no. 3 (Summer 2010); Manyee Wong, Thomas D. Cook, and Peter M. Steiner, “No Child Left Behind: An Interim Evaluation of Its Effects on Learning Using Two Interrupted Time Series Each with Its Own Non-Equivalent Comparison Series,” Working Paper 09-11 (Evanston, IL: Northwestern University Institute for Policy Research, 2009).

[vii] Common Core State Standards Initiative. “English Language Arts Standards, Key Design Consideration.” Retrieved from: http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/

[viii] Twelve states shifted downward from strong to medium and five states shifted upward from medium to strong, netting out to a seven state swing.

Common Core and classroom instruction: The good, the bad, and the ugly


This post continues a series begun in 2014 on implementing the Common Core State Standards (CCSS).  The first installment introduced an analytical scheme investigating CCSS implementation along four dimensions:  curriculum, instruction, assessment, and accountability.  Three posts focused on curriculum.  This post turns to instruction.  Although the impact of CCSS on how teachers teach is discussed, the post is also concerned with the inverse relationship, how decisions that teachers make about instruction shape the implementation of CCSS.

A couple of points before we get started.  The previous posts on curriculum led readers from the upper levels of the educational system—federal and state policies—down to curricular decisions made “in the trenches”—in districts, schools, and classrooms.  Standards emanate from the top of the system and are produced by politicians, policymakers, and experts.  Curricular decisions are shared across education’s systemic levels.  Instruction, on the other hand, is dominated by practitioners.  The daily decisions that teachers make about how to teach under CCSS—and not the idealizations of instruction embraced by upper-level authorities—will ultimately determine what “CCSS instruction” really means.

I ended the last post on CCSS by describing how curriculum and instruction can be so closely intertwined that the boundary between them is blurred.  Sometimes stating a precise curricular objective dictates, or at least constrains, the range of instructional strategies that teachers may consider.  That post focused on English-Language Arts.  The current post focuses on mathematics in the elementary grades and describes examples of how CCSS will shape math instruction.  As a former elementary school teacher, I offer my own personal opinion on these effects.

The Good

Certain aspects of the Common Core, when implemented, are likely to have a positive impact on the instruction of mathematics. For example, Common Core stresses that students recognize fractions as numbers on a number line.  The emphasis begins in third grade:

CCSS.MATH.CONTENT.3.NF.A.2
Understand a fraction as a number on the number line; represent fractions on a number line diagram.

CCSS.MATH.CONTENT.3.NF.A.2.A
Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.

CCSS.MATH.CONTENT.3.NF.A.2.B
Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.


When I first read this section of the Common Core standards, I stood up and cheered.  Berkeley mathematician Hung-Hsi Wu has been working with teachers for years to get them to understand the importance of using number lines in teaching fractions.[1] American textbooks rely heavily on part-whole representations to introduce fractions.  Students typically see pizzas, apples, and other objects—usually foods or money—divided into equal parts.  Such models are limited.  They work okay with simple addition and subtraction.  Common denominators present a bit of a challenge, but ½ pizza can be shown to be also 2/4, a half dollar equal to two quarters, and so on.

With multiplication and division, all the little tricks students learned with whole number arithmetic suddenly go haywire.  Students are accustomed to the fact that multiplying two whole numbers yields a product that is larger than either number being multiplied: 4 X 5 = 20, and 20 is larger than both 4 and 5.[2]  How in the world can ¼ X 1/5 = 1/20, a number much smaller than either 1/4 or 1/5?  The part-whole representation has convinced many students that fractions are not numbers.  Instead, they are seen as strange expressions comprising two numbers with a small horizontal bar separating them.

I taught sixth grade but occasionally visited my colleagues’ classes in the lower grades.  I recall one exchange with second or third graders that went something like this:

“Give me a number between seven and nine.”  Giggles. 

“Eight!” they shouted. 

“Give me a number between two and three.”  Giggles.

“There isn’t one!” they shouted. 

“Really?” I’d ask and draw a number line.  After spending some time placing whole numbers on the number line, I’d observe,  “There’s a lot of space between two and three.  Is it just empty?” 

Silence.  Puzzled little faces.  Then a quiet voice.  “Two and a half?”

You have no idea how many children do not make the transition to understanding fractions as numbers and, because they stumble at this crucial stage, spend the rest of their careers as students of mathematics convinced that fractions are an impenetrable mystery.  And that’s not true of just students.  California adopted a test for teachers in the 1980s, the California Basic Educational Skills Test (CBEST).  Beginning in 1982, even teachers already in the classroom had to pass it.  I made a nice after-school and summer income tutoring colleagues who didn’t know fractions from Fermat’s Last Theorem.  To be fair, primary teachers, teaching kindergarten or grades 1-2, would not teach fractions as part of their math curriculum and probably hadn’t worked with a fraction in decades.  So they are no different from non-literary types who think Hamlet is just a play about a young guy who can’t make up his mind, has a weird relationship with his mother, and winds up dying at the end.

Division is the most difficult operation to grasp for those arrested at the part-whole stage of understanding fractions.  A problem that Liping Ma posed to teachers is now legendary.[3]

She asked small groups of American and Chinese elementary teachers to divide 1 ¾ by ½ and to create a word problem that illustrates the calculation.  All 72 Chinese teachers gave the correct answer and 65 developed an appropriate word problem.  Only nine of the 23 American teachers solved the problem correctly.  A single American teacher was able to devise an appropriate word problem.  Granted, the American sample was not selected to be representative of American teachers as a whole, but the stark findings of the exercise did not shock anyone who has worked closely with elementary teachers in the U.S.  They are often weak at math.  Many of the teachers in Ma’s study had vague ideas of an “invert and multiply” rule but lacked a conceptual understanding of why it worked.

A linguistic convention exacerbates the difficulty.  Students may cling to the mistaken notion that “dividing in half” means “dividing by one-half.”  It does not.  Dividing in half means dividing by two.  The number line can help clear up such confusion.  Consider a basic, whole-number division problem for which third graders will already know the answer:  8 divided by 2 equals 4.   It is evident that a segment 8 units in length (measured from 0 to 8) is divided by a segment 2 units in length (measured from 0 to 2) exactly 4 times.  Modeling 12 divided by 2 and other basic facts with 2 as a divisor will convince students that whole number division works quite well on a number line. 

Now consider the number ½ as a divisor.  It will become clear to students that 8 divided by ½ equals 16, and they can illustrate that fact on a number line by showing how a segment ½ units in length divides a segment 8 units in length exactly 16 times; it divides a segment 12 units in length 24 times; and so on.  Students will be relieved to discover that on a number line division with fractions works the same as division with whole numbers.

Now, let’s return to Liping Ma’s problem: 1 ¾ divided by ½.   This problem would not be presented in third grade, but it might be in fifth or sixth grades.  Students who have been working with fractions on a number line for two or three years will have little trouble solving it.  They will see that the problem simply asks them to divide a line segment of 1 3/4 units by a segment of ½ units.  The answer is 3 ½ .  Some students might estimate that the solution is between 3 and 4 because 1 ¾ lies between 1 ½ and 2, which on the number line are the points at which the ½ unit segment, laid end on end, falls exactly three and four times.  Other students will have learned about reciprocals and that multiplication and division are inverse operations.  They will immediately grasp that dividing by ½ is the same as multiplying by 2—and since 1 ¾ x 2 = 3 ½, that is the answer.  Creating a word problem involving string or rope or some other linearly measured object is also surely within their grasp.
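A quick check with exact rational arithmetic mirrors the number-line reasoning above. This is only an illustrative sketch using Python’s fractions module; it is not part of any curriculum.

```python
# Verify the number-line reasoning with exact rational arithmetic.
from fractions import Fraction

half = Fraction(1, 2)

# How many segments of length 1/2 fit into a segment of length 8?
print(Fraction(8) / half)            # 16

# Liping Ma's problem: 1 3/4 divided by 1/2
print((1 + Fraction(3, 4)) / half)   # 7/2, i.e., 3 1/2

# Dividing by 1/2 is the same as multiplying by its reciprocal, 2
print((1 + Fraction(3, 4)) * 2)      # 7/2 again
```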

Conclusion

I applaud the CCSS for introducing number lines and fractions in third grade.  I believe it will instill in children an important idea: fractions are numbers.  That foundational understanding will aid them as they work with more abstract representations of fractions in later grades.   Fractions are a monumental barrier for kids who struggle with math, so the significance of this contribution should not be underestimated.

I mentioned above that instruction and curriculum are often intertwined.  I began this series of posts by defining curriculum as the “stuff” of learning—the content of what is taught in school, especially as embodied in the materials used in instruction.  Instruction refers to the “how” of teaching—how teachers organize, present, and explain those materials.  It’s each teacher’s repertoire of instructional strategies and techniques that differentiates one teacher from another even as they teach the same content.  Choosing to use a number line to teach fractions is obviously an instructional decision, but it also involves curriculum.  The number line is mathematical content, not just a teaching tool.

Guiding third grade teachers towards using a number line does not guarantee effective instruction.  In fact, it is reasonable to expect variation in how teachers will implement the CCSS standards listed above.  A small body of research exists to guide practice. One of the best resources for teachers to consult is a practice guide published by the What Works Clearinghouse: Developing Effective Fractions Instruction for Kindergarten Through Eighth Grade (see full disclosure below).[4]  The guide’s second recommendation is to use number lines, but it also states that the evidence supporting the effectiveness of number lines in teaching fractions is inferred from studies involving whole numbers and decimals.  We need much more research on how and when number lines should be used in teaching fractions.

Professor Wu states the following, “The shift of emphasis from models of a fraction in the initial stage to an almost exclusive model of a fraction as a point on the number line can be done gradually and gracefully beginning somewhere in grade four. This shift is implicit in the Common Core Standards.”[5]  I agree, but the shift is also subtle.  CCSS standards include the use of other representations—fraction strips, fraction bars, rectangles (which are excellent for showing multiplication of two fractions) and other graphical means of modeling fractions.  Some teachers will manage the shift to number lines adroitly—and others will not.  As a consequence, the quality of implementation will vary from classroom to classroom based on the instructional decisions that teachers make.  

The current post has focused on what I believe to be a positive aspect of CCSS based on the implementation of the standards through instruction.  Future posts in the series—covering the “bad” and the “ugly”—will describe aspects of instruction on which I am less optimistic.



[1] See H. Wu (2014). “Teaching Fractions According to the Common Core Standards,” https://math.berkeley.edu/~wu/CCSS-Fractions_1.pdf. Also see "What's Sophisticated about Elementary Mathematics?" http://www.aft.org/sites/default/files/periodicals/wu_0.pdf

[2] Students learn that 0 and 1 are exceptions and have their own special rules in multiplication.

[3] Liping Ma, Knowing and Teaching Elementary Mathematics.

[4] The practice guide can be found at: http://ies.ed.gov/ncee/wwc/pdf/practice_guides/fractions_pg_093010.pdf. I serve as a content expert in elementary mathematics for the What Works Clearinghouse.  I had nothing to do, however, with the publication cited.

[5] Wu, page 3.


Implementing Common Core: The problem of instructional time


This is part two of my analysis of instruction and Common Core’s implementation.  I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.”  Having discussed “the “good” in part one, I now turn to “the bad.”  One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record.  Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn.     

The numerator, time spent learning, has also been given the term opportunity to learn.  The denominator, time needed to learn, is synonymous with student aptitude.  By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] 

The source of that variation is largely irrelevant to the constraints placed on instructional decisions.  Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator.  Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like.  There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought.  As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article.  Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll.  It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum.  This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning.  Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today.  Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.     

Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator.  When that happens, students don’t have an adequate opportunity to learn.  They need more time. 

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time.  Time is a limited resource that teachers deploy in the production of learning.  Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary.  Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics.  Excessive instructional time may also negatively affect student engagement.  Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.
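To make the model concrete, here is a minimal sketch in Python. The cap at 1.0 and the example values are my own illustrative assumptions, not Carroll’s.

```python
# A sketch of Carroll's model of school learning: degree of learning as a
# function of time spent learning over time needed to learn.
# The cap at 1.0 and the example values are illustrative assumptions.

def degree_of_learning(time_spent: float, time_needed: float) -> float:
    """Ratio capped at 1.0: time beyond what is needed adds no learning."""
    return min(time_spent / time_needed, 1.0)

# A student who needs 60 minutes but receives 45 lacks the opportunity to learn.
print(degree_of_learning(45, 60))   # 0.75 -- insufficient time

# A student who needs 20 minutes but sits through 60 mastered the content
# long ago; the extra 40 minutes are wasted and invite disengagement.
print(degree_of_learning(60, 20))   # 1.0 -- with 40 minutes of excess time
```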

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M.  A standard algorithm is a series of steps designed to compute accurately and quickly.  In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers.  Most readers of this post will recognize the standard algorithm for addition.  It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.
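For readers who want to see that procedure spelled out step by step, here is a minimal sketch in Python. The implementation details are mine, not anything prescribed by CCSS-M.

```python
# The standard addition algorithm: line up digits by place value, add the
# columns right to left, and carry (regroup) as needed.
# Implementation details are illustrative, not drawn from any curriculum.

def standard_addition(a: int, b: int) -> int:
    x, y = str(a)[::-1], str(b)[::-1]    # reversed: work right to left
    carry, digits = 0, []
    for i in range(max(len(x), len(y))):
        column = carry
        column += int(x[i]) if i < len(x) else 0
        column += int(y[i]) if i < len(y) else 0
        digits.append(column % 10)       # digit written in this column
        carry = column // 10             # digit carried to the next column
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(standard_addition(19, 6))     # 25
print(standard_addition(478, 385))  # 863
```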

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards.  Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade.  This opens the door for a lot of wasted time.  Garelick questioned the wisdom of teaching several alternative strategies for addition.  He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, whether it could be taught first. As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 

 

Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations.  He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

 

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades.  Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

 

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

 

3.NBT.2  Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

 

The term “strategies and algorithms” is curious.  Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” 

 

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that  despite these being standards, the CCSS-M will allow them great latitude.  Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

 

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. 

 

Standard algorithms were a key point of contention in the “Math Wars” of the 1990s.   The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii]  The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade.  These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists.  In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 

 

Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade.  That’s what reformers get in the compromise.  They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic.  But the standard algorithm is the only one students are required to learn.  And that exclusivity is intended to please the traditionalists.

 

I agree with Garelick that the compromise leads to problems.  In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home.  The students were being taught how to represent addition with drawings that clustered objects into groups of ten.  The exercises were both time consuming and tedious.  When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning.  The parents withdrew their child from the school and enrolled him in private school.

 

The value of standard algorithms is that they are efficient and packed with mathematics.  Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.  Traditionalists and reformers have different goals.  Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems.  Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

 

I have been a critic of the math reform movement since I taught in the 1980s.  But some of the reformers’ complaints have merit.  All too often, instruction on standard algorithms has left out meaning.  As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized.  Michael Battista’s research has provided many instances of students clinging to algorithms without understanding.  He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet.  On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15.  In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25.  Despite the obvious discrepancy (25 is not 15, the student agrees), he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii]

 

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that?  Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions.  Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

 

Conclusion

Let’s return to Carroll’s model of time and learning.  I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences.  Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade.  Thus, the study of whole number arithmetic is completed by the end of fourth grade.  Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics.  Proficiency is sought along three dimensions:  1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years.  Placing the standard for the division algorithm in sixth grade continues the two-year delay.   For many fourth graders, time spent working on addition and subtraction will be wasted time.  They already have a firm understanding of addition and subtraction.  The same thing for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers.  The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures.  Such decisions are made by local educators.  Variation in these decisions will introduce variation in the implementation of the math standards.  It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction.  But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.


[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.  

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001).  “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).


CNN’s misleading story on homework


Last week, CNN ran a back-to-school story on homework with the headline, “Kids Have Three Times Too Much Homework, Study Finds; What’s the Cost?” Homework is an important topic, especially for parents, but unfortunately, CNN’s story misleads rather than informs. The headline suggests American parents should be alarmed because their kids have too much homework. Should they? No, CNN has ignored the best evidence on that question, which suggests the opposite. The story relies on the results of one recent study of homework—a study that is limited in what it can tell us, mostly because of its research design. But CNN even gets its main findings wrong. The study suggests most students have too little homework, not too much.

The Study

The study that piqued CNN’s interest was conducted during four months (two in the spring and two in the fall) in Providence, Rhode Island. About 1,200 parents completed a survey about their children’s homework while waiting in 27 pediatricians’ offices. Is the sample representative of all parents in the U.S.? Probably not. Certainly CNN should have been a bit leery of portraying the results of a survey conducted in a single American city—any city—as evidence applying to a broader audience. More importantly, viewers are never told of the study’s significant limitations: that the data come from a survey conducted in only one city—in pediatricians’ offices by a self-selected sample of respondents.

The survey’s sampling design is a huge problem. Because the sample is non-random there is no way of knowing if the results can be extrapolated to a larger population—even to families in Providence itself. Close to a third of respondents chose to complete the survey in Spanish. Enrollment in English Language programs in the Providence district comprises about 22 percent of students. About one-fourth (26 percent) of survey respondents reported having one child in the family. According to the 2010 Census, the proportion of families nationwide with one child is much higher, at 43 percent.[i] The survey is skewed towards large, Spanish-speaking families. Their experience with homework could be unique, especially if young children in these families are learning English for the first time at school.

The survey was completed by parents who probably had a sick child as they were waiting to see a pediatrician. That’s a stressful setting. The response rate to the survey is not reported, so we don’t know how many parents visiting those offices chose not to fill out the survey. If the typical pediatrician sees 100 unique patients per month, in a four month span the survey may have been offered to more than ten thousand parents in the 27 offices. The survey respondents, then, would be a tiny slice, 10 to 15 percent, of those eligible to respond. We also don’t know the public-private school break out of the respondents, or how many were sending their children to charter schools. It would be interesting to see how many parents willingly send their children to schools with a heavy homework load.
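That back-of-the-envelope response rate is easy to reproduce. Here is a minimal sketch in Python using the hypothetical patient volume stated above.

```python
# Rough response rate for the Providence survey, using the assumptions in
# the text: 27 offices, ~100 unique patients per month, four months.
offices, patients_per_month, months = 27, 100, 4

eligible = offices * patients_per_month * months
respondents = 1200  # approximate number of completed surveys

print(eligible)                             # 10,800 parents
print(round(respondents / eligible * 100))  # ~11 percent responding
```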

I wish the CNN team responsible for this story had run the data by some of CNN’s political pollsters. Alarm bells surely would have gone off. The hazards of accepting a self-selected, demographically-skewed survey sample as representative of the general population are well known. Modern political polling—and its reliance on random samples—grew from an infamous mishap in 1936. A popular national magazine, the Literary Digest, distributed 10 million post cards for its readers to return as “ballots” indicating who they would vote for in the 1936 race for president. More than two million post cards were returned! A week before the election, the magazine confidently predicted that Alf Landon, the Republican challenger from Kansas, would defeat Franklin Roosevelt, the Democratic incumbent, by a huge margin: 57 percent to 43 percent. In fact, when the real election was held, the opposite occurred: Roosevelt won more than 60 percent of the popular vote and defeated Landon in a landslide. Pollsters learned that self-selected samples should be viewed warily. The magazine’s readership was disproportionately Republican to begin with, and sometimes disgruntled subjects are more likely to respond to a survey, no matter the topic, than the satisfied.

Here’s a very simple question: In its next poll on the 2016 presidential race, would CNN report the results of a survey of self-selected respondents in 27 pediatricians’ offices in Providence, Rhode Island as representative of national sentiment? Of course not. Then, please, CNN, don’t do so with education topics.

The Providence Study’s Findings

Let’s set aside methodological concerns and turn to CNN’s characterization of the survey’s findings. Did the study really show that most kids have too much homework? No, the headline that “Kids Have Three Times Too Much Homework” is not even an accurate description of the study’s findings. CNN’s on air coverage extended the misinformation. The online video of the coverage is tagged “Study: Your Kids Are Doing Too Much Homework.” The first caption that viewers see is “Study Says Kids Getting Way Too Much Homework.” All of these statements are misleading.

In the published version of the Providence study, the researchers plotted the average amount of time spent on homework by students’ grade.[ii] They then compared those averages to a “10 minutes per-grade” guideline that serves as an indicator of the “right” amount of homework. I have attempted to replicate the data here in table form (they were originally reported in a line graph) to make that comparison easier.[iii]

Contrary to CNN’s reporting, the data suggest—based on the ten minute per-grade rule—that most kids in this study have too little homework, not too much. Beginning in fourth grade, the average time spent on homework falls short of the recommended amount—a gap of only four minutes in fourth grade that steadily widens in later grades.

A more accurate headline would have been, “Study Shows Kids in Nine out of 13 Grades Have Too Little Homework.” It appears high school students (grades 9-12) spend only about half the recommended time on homework. Two hours of nightly homework is recommended for 12th graders. They are, after all, only a year away from college. But according to the Providence survey, their homework load is less than an hour.
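The comparison underlying that table is simple arithmetic. Here is a minimal sketch in Python; the reported minutes are the K-2 figures quoted from CNN’s coverage below, and the rule is the 10-minutes-per-grade guideline.

```python
# The "10-minute rule": recommended nightly homework is 10 minutes per
# grade level, with none recommended for kindergarten (grade 0).

def recommended_minutes(grade: int) -> int:
    return 10 * grade

# Reported minutes for K, 1st, and 2nd grade, as quoted in this post.
reported = {0: 25, 1: 28, 2: 29}

for grade, minutes in reported.items():
    rec = recommended_minutes(grade)
    print(f"grade {grade}: reported {minutes} min vs. recommended {rec} min")
# Only in the earliest grades does reported time exceed the guideline;
# from fourth grade on, the survey's averages fall short of it.
```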

So how in the world did CNN come up with the headline “Kids Have Three Times Too Much Homework?” By focusing on grades K-3 and ignoring all other grades. Here’s the reporting:

The study, published Wednesday in The American Journal of Family Therapy, found students in the early elementary school years are getting significantly more homework than is recommended by education leaders, in some cases nearly three times as much homework as is recommended.

 

The standard, endorsed by the National Education Association and the National Parent-Teacher Association, is the so-called "10-minute rule"— 10 minutes per-grade level per-night. That translates into 10 minutes of homework in the first grade, 20 minutes in the second grade, all the way up to 120 minutes for senior year of high school. The NEA and the National PTA do not endorse homework for kindergarten.

 

In the study involving questionnaires filled out by more than 1,100 English and Spanish speaking parents of children in kindergarten through grade 12, researchers found children in the first grade had up to three times the homework load recommended by the NEA and the National PTA.

 

Parents reported first-graders were spending 28 minutes on homework each night versus the recommended 10 minutes. For second-graders, the homework time was nearly 29 minutes, as opposed to the 20 minutes recommended.

 

And kindergartners, their parents said, spent 25 minutes a night on after-school assignments, according to the study.

 

CNN focused on the four grades, K-3, in which homework exceeds the ten-minute rule. They ignored more than two-thirds of the grades. Even with this focus, a more accurate headline would have been, “Study Suggests First Graders in Providence, RI Have Three Times Too Much Homework.”

Conclusion

Homework is a controversial topic. People hold differing points of view as to whether there is too much, too little, or just the right amount of homework. That makes it vitally important that the media give accurate information on the empirical dimensions of the debate.  The amount of homework kids should have is subject to debate. But the amount of homework kids actually have is an empirical question. We can debate whether it’s too hot outside, but the actual temperature should be a matter of measurement, not debate. It is impossible for a rational debate on the homework issue to proceed without knowledge of the empirical status quo with regard to time. Imagine someone beginning a debate by saying, “I am arguing that kids have too much [substitute “too little” here for the pro-homework side] homework, but I must admit that I have no idea how much they currently have.”

Data from the National Assessment of Educational Progress (NAEP) provide the best evidence we have on the amount of homework that kids have. NAEP’s sampling design allows us to make inferences about national trends, and the Long-Term Trend (LTT) NAEP offers data on homework since 1984. The latest LTT NAEP results (2012) indicate that the vast majority of nine-year-olds (83 percent) have less than an hour of homework each night. There has been an apparent uptick in the homework load, however, as 35 percent reported no homework in 1984, and only 22 percent reported no homework in 2012. MET Life also periodically surveys a representative sample of students, parents, and teachers on the homework issue. In the 2007 results, a majority of parents (52 percent) of elementary grade students (grades 3-6 in the MET survey) estimated their children had 30 minutes or less of homework.

The MET Life survey found that parents have an overwhelmingly positive view of the amount of homework their children are assigned. Nine out of ten parents responded that homework offers the opportunity to talk and spend time with their children, and most do not see homework as interfering with family time or as a major source of familial stress. Minority parents, in particular, reported believing homework is beneficial for students’ success at school and in the future.[iv]

That said, just as there were indeed Alf Landon voters in 1936, there are indeed children for whom homework is a struggle. Some bring home more than they can finish in a reasonable amount of time. A complication for researchers of elementary age children is that the same students who have difficulty completing homework may have other challenges—difficulties with reading, low achievement, and poor grades in school.[v] Parents who question the value of homework often have a host of complaints about their child’s school. It is difficult for researchers to untangle all of these factors and determine, in the instances where there are tensions, whether homework is the real cause. To their credit, the researchers who conducted the Providence study are aware of these constraints and present a number of hypotheses warranting further study with a research design supporting causal inferencing. That’s the value of this research, not CNN’s misleading reporting of the findings.


[i] Calculated from data in Table 64, U.S. Census Bureau, Statistical Abstract of the United States: 2012, page 56. http://www.census.gov/compendia/statab/2012/tables/12s0064.pdf.

[ii] The mean sample size for each grade is reported as 7.7 percent (or 90 students).  Confidence intervals for each grade estimate are not reported.

[iii] The data in Table I are estimates (by sight) from a line graph incremented in five percentage point intervals.

[iv] MetLife, MetLife Survey of the American Teacher: The Homework Experience, November 13, 2007, p. 15.

[v] Among high school students, the bias probably leans in the opposite direction: high achievers load up on AP, IB, and other courses that assign more homework.


No, the sky is not falling: Interpreting the latest SAT scores


Earlier this month, the College Board released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Michael Petrilli of the Fordham Foundation argued that falling SAT scores show that high schools need more reform, presumably those his organization supports, in particular, charter schools and accountability.* For Carol Burris of the Network for Public Education, the declining scores were evidence of the failure of polices her organization opposes, namely, Common Core, No Child Left Behind, and accountability.

Petrilli and Burris are both misusing SAT scores. The SAT is not designed to measure national achievement; the score losses from 2014 were miniscule; and most of the declines are probably the result of demographic changes in the SAT population. Let’s examine each of these points in greater detail.

The SAT is not designed to measure national achievement

It never was. The SAT was originally meant to measure a student’s aptitude for college independent of that student’s exposure to a particular curriculum. The test’s founders believed that gauging aptitude, rather than achievement, would serve the cause of fairness. A bright student from a high school in rural Nebraska or the mountains of West Virginia, they held, should have the same shot at attending elite universities as a student from an Eastern prep school, despite not having been exposed to the great literature and higher mathematics taught at prep schools. The SAT would measure reasoning and analytical skills, not the mastery of any particular body of knowledge. Its scores would level the playing field in terms of curricular exposure while providing a reasonable estimate of an individual’s probability of success in college.

Note that even in this capacity, the scores never suffice alone; they are only used to make admissions decisions by colleges and universities, including such luminaries as Harvard and Stanford, in combination with a lot of other information—grade point averages, curricular resumes, essays, reference letters, extra-curricular activities—all of which constitute a student’s complete application.

Today’s SAT has moved towards being a content-oriented test, but not entirely. Next year, the College Board will introduce a revised SAT to more closely reflect high school curricula. Even then, SAT scores should not be used to make judgements about U.S. high school performance, whether it’s a single high school, a state’s high schools, or all of the high schools in the country. The SAT sample is self-selected. In 2015, it only included about one-half of the nation’s high school graduates: 1.7 million out of approximately 3.3 million total. And that’s about one-ninth of approximately 16 million high school students.  Generalizing SAT scores to these larger populations violates a basic rule of social science. The College Board issues a warning when it releases SAT scores: “Since the population of test takers is self-selected, using aggregate SAT scores to compare or evaluate teachers, schools, districts, states, or other educational units is not valid, and the College Board strongly discourages such uses.”  

TIME’s coverage of the SAT release included a statement by Andrew Ho of Harvard University, who succinctly makes the point: “I think SAT and ACT are tests with important purposes, but measuring overall national educational progress is not one of them.”

The score changes from 2014 were miniscule

SAT scores changed very little from 2014 to 2015. Reading scores dropped from 497 to 495. Math scores also fell two points, from 513 to 511. Both declines are equal to about 0.017 standard deviations (SD).[i] To illustrate how small these changes truly are, let’s examine a metric I have used previously in discussing test scores. The average American male is 5’10” in height with an SD of about 3 inches. A 0.017 SD change in height is equal to about 1/20 of an inch (0.051). Do you really think you’d notice a difference in the height of two men standing next to each other if they differed by only 1/20 of an inch? You wouldn’t. Similarly, the change in SAT scores from 2014 to 2015 is trivial.[ii]
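The height analogy is easy to verify with a couple of lines of arithmetic. In the sketch below, the SAT section SD of about 118 points is an assumption back-derived from the 0.017 figure, not an official College Board value.

```python
# Express the score drop and its height analogue in SD units.
# The SAT section SD (~118 points) is a back-derived assumption.

sat_drop_sd = 2 / 118            # two-point drop in each section
print(round(sat_drop_sd, 3))     # ~0.017 SD

height_sd_inches = 3             # SD of adult male height, per the text
print(round(sat_drop_sd * height_sd_inches, 3))  # ~0.051 inches, about 1/20"
```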

A more serious concern is the SAT trend over the past decade. Since 2005, reading scores are down 13 points, from 508 to 495, and math scores are down nine points, from 520 to 511. These are equivalent to declines of 0.12 SD for reading and 0.08 SD for math.[iii] Representing changes that have accumulated over a decade, these losses are still quite small. In the Washington Post, Michael Petrilli asked “why is education reform hitting a brick wall in high school?” He also stated that “you see this in all kinds of evidence.”

You do not see a decline in the best evidence, the National Assessment of Educational Progress (NAEP). Contrary to the SAT, NAEP is designed to monitor national achievement. Its test scores are based on a random sampling design, meaning that the scores can be construed as representative of U.S. students. NAEP administers two different tests to high school age students, the long term trend (LTT NAEP), given to 17-year-olds, and the main NAEP, given to twelfth graders.

Table 1 compares the past ten years’ change in test scores of the SAT with changes in NAEP.[iv] The long term trend NAEP was not administered in 2005 or 2015, so the closest years it was given are shown. The NAEP tests show high school students making small gains over the past decade. They do not confirm the losses on the SAT.

Table 1. Comparison of changes in SAT, Main NAEP (12th grade), and LTT NAEP (17-year-olds) scores. Changes expressed as SD units of base year.

             SAT          Main NAEP     LTT NAEP
             2005-2015    2005-2015     2004-2012
Reading      -0.12*       +0.05*        +0.09*
Math         -0.08*       +0.09*        +0.03

* p < .05

Petrilli raised another concern by examining cohort trends in NAEP scores.  The trend for the 17-year-old cohort of 2012, for example, can be constructed by using the scores of 13-year-olds in 2008 and 9-year-olds in 2004.  By tracking NAEP changes over time in this manner, one can get a rough idea of a particular cohort’s achievement as students grow older and proceed through the school system.  Examining three cohorts, Fordham’s analysis shows that the gains between ages 13 and 17 are about half as large as those registered between ages nine and 13.  Kids gain more on NAEP when they are younger than when they are older.
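Constructing a cohort trend from the age-based LTT NAEP results is mechanical, as the minimal sketch below shows. The scores are placeholders chosen only to echo the roughly two-to-one pattern Fordham reports; they are not actual NAEP values.

```python
# Build a cohort trend from age-based LTT NAEP results: the cohort that
# was 17 in 2012 was 13 in 2008 and 9 in 2004.
# Scores below are hypothetical placeholders, not actual NAEP values.

scores = {(2004, 9): 219, (2008, 13): 260, (2012, 17): 280}

gain_9_to_13 = scores[(2008, 13)] - scores[(2004, 9)]    # 41 points
gain_13_to_17 = scores[(2012, 17)] - scores[(2008, 13)]  # 20 points
print(gain_9_to_13, gain_13_to_17)  # younger students gain more
```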

There is nothing new here. NAEP scholars have been aware of this phenomenon for a long time. Fordham points to particular elements of education reform that it favors—charter schools, vouchers, and accountability—as the probable cause. It is true that those reforms more likely target elementary and middle schools than high schools. But the research literature on age discrepancies in NAEP gains (which is not cited in the Fordham analysis) renders doubtful the thesis that education policies are responsible for the phenomenon.[v]

One explanation that has been offered is motivation: high school age students may not try as hard as they can on NAEP. A 1996 analysis of NAEP answer sheets found that 25-to-30 percent of twelfth graders displayed off-task test behaviors—doodling, leaving items blank—compared to 13 percent of eighth graders and six percent of fourth graders. A 2004 national commission on the twelfth grade NAEP recommended incentives (scholarships, certificates, letters of recognition from the President) to boost high school students’ motivation to do well on NAEP. Why would high school seniors or juniors take NAEP seriously when this low-stakes test is taken in the midst of SAT or ACT tests for college admission, end-of-course exams that affect high school GPA, AP tests that can affect placement in college courses, state accountability tests that can lead to their schools being deemed a success or failure, and high school exit exams that must be passed to graduate?[vi]

Other possible explanations for the phenomenon are: 1) differences in the scales between the ages tested on LTT NAEP (in other words, a one-point gain on the scale between ages nine and 13 may not represent the same amount of learning as a one-point gain between ages 13 and 17); 2) different rates of participation in NAEP among elementary, middle, and high schools;[vii] and 3) social trends that affect all high school students, not just those in public schools. The third possibility can be explored by analyzing trends for students attending private schools. If Fordham had disaggregated the NAEP data by public and private schools (the scores of Catholic school students are available), it would have found that the pattern among private school students is similar—younger students gain more than older students on NAEP. That similarity casts doubt on the notion that policies governing public schools are responsible for the smaller gains among older students.[viii]

Changes in the SAT population

Writing in the Washington Post, Carol Burris addresses the question of whether demographic changes have influenced the decline in SAT scores. She concludes that they have not; in particular, she concludes that the growing proportion of students receiving exam fee waivers has probably not affected scores. She bases that conclusion on an analysis of SAT participation disaggregated by level of family income. Burris notes that the percentage of SAT takers has been stable across income groups in recent years. That criterion is not trustworthy. About 39 percent of students in 2015 declined to provide information on family income. The 61 percent who answered the family income question are probably skewed against low-income students who are on fee waivers (the assumption being that they may feel uncomfortable answering a question about family income).[ix] Don’t forget that the SAT population as a whole is a self-selected sample. A self-selected subsample from a self-selected sample tells us even less than the original sample, which told us almost nothing.

The fee waiver share of SAT takers increased from 21 percent in 2011 to 25 percent in 2015. The simple fact that fee waivers serve low-income families, whose children tend to be lower-scoring SAT takers, is important, but it is not the whole story. Students from disadvantaged families have always taken the SAT—but they paid for it themselves. If additional disadvantaged families take the SAT because they no longer have to pay for it, it is important to consider whether these new entrants to the pool of SAT test takers possess unmeasured characteristics that correlate with achievement—beyond the effect already attributed to socioeconomic status.

Robert Kelchen, an assistant professor of higher education at Seton Hall University, calculated the effect on national SAT scores of just three jurisdictions (Washington, DC, Delaware, and Idaho) adopting policies of mandatory SAT testing paid for by the state. He estimated that these policies explain about 21 percent of the nationwide decline in test scores between 2011 and 2015. He also notes that a more thorough analysis, incorporating fee waivers of other states and districts, would surely boost that figure. Fee waivers in two dozen Texas school districts, for example, are granted to all juniors and seniors in high school. And all students in those districts (including Dallas and Fort Worth) are required to take the SAT beginning in the junior year. Such universal testing policies can increase access and serve the cause of equity, but they will also, at least for a while, lead to a decline in SAT scores.

Here, I offer my own back of the envelope calculation of the relationship of demographic changes with SAT scores. The College Board reports test scores and participation rates for nine racial and ethnic groups.[x] These data are preferable to family income because a) almost all students answer the race/ethnicity question (only four percent are non-responses versus 39 percent for family income), and b) it seems a safe assumption that students are more likely to know their race or ethnicity compared to their family’s income.

The question tackled in Table 2 is this: how much would the national SAT scores have changed from 2005 to 2015 if the scores of each racial/ethnic group stayed exactly the same as in 2005, but each group’s proportion of the total population were allowed to vary? In other words, the scores are fixed at the 2005 level for each group—no change. The SAT national scores are then recalculated using the 2015 proportions that each group represented in the national population.
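
For concreteness, here is a minimal sketch of that reweighting. The group scores and population shares below are invented placeholders meant only to show the method; the actual calculation uses the College Board’s published figures for all nine groups.

    # Counterfactual national average: hold each group's 2005 score fixed,
    # but weight the groups by their 2015 shares of the test-taking population.
    # All numbers here are illustrative, not College Board data.
    scores_2005 = {"group_a": 530, "group_b": 460, "group_c": 500}
    shares_2005 = {"group_a": 0.60, "group_b": 0.15, "group_c": 0.25}
    shares_2015 = {"group_a": 0.50, "group_b": 0.25, "group_c": 0.25}

    actual_2005 = sum(scores_2005[g] * shares_2005[g] for g in scores_2005)
    counterfactual_2015 = sum(scores_2005[g] * shares_2015[g] for g in scores_2005)

    # The projected change attributable to demographic shifts alone:
    print(counterfactual_2015 - actual_2005)    # -7 points in this toy example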

Table 2. SAT Scores and Demographic Changes in the SAT Population (2005-2015)

              Projected Change Based      Actual      Projected Change as
              on Change in Proportions    Change      Percentage of Actual Change
Reading       -9                          -13         69%
Math          -7                          -9          78%
The data suggest that two-thirds to three-quarters of the SAT score decline from 2005 to 2015 is associated with demographic changes in the test-taking population. The analysis is admittedly crude. The relationships are correlational, not causal. The race/ethnicity categories are surely serving as proxies for a bundle of other characteristics affecting SAT scores, some unobserved and others (e.g., family income, parental education, language status, class rank) that are included in the SAT questionnaire but produce data difficult to interpret.

Conclusion

Using an annual decline in SAT scores to indict high schools is bogus. The SAT should not be used to measure national achievement. SAT changes from 2014-2015 are tiny. The downward trend over the past decade represents a larger decline in SAT scores, but one that is still small in magnitude and correlated with changes in the SAT test-taking population.

In contrast to SAT scores, NAEP scores, which are designed to monitor national achievement, report slight gains for 17-year-olds over the past ten years. It is true that LTT NAEP gains are larger among students from ages nine to 13 than from ages 13 to 17, but research has uncovered several plausible explanations for why that occurs. The public should exercise great caution in accepting the findings of test score analyses. Test scores are often misinterpreted to promote political agendas, and much of the alarmist rhetoric provoked by small declines in scores is unjustified.


* In fairness to Petrilli, he acknowledges in his post, “The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.”


[i] The 2014 SD for both SAT reading and math was 115.

[ii] A substantively trivial change may nevertheless reach statistical significance with large samples.

[iii] The 2005 SDs were 113 for reading and 115 for math.

[iv] Throughout this post, SAT’s Critical Reading (formerly, the SAT-Verbal section) is referred to as “reading.” I only examine SAT reading and math scores to allow for comparisons to NAEP. Moreover, SAT’s writing section will be dropped in 2016.

[v] The larger gains by younger vs. older students on NAEP is explored in greater detail in the 2006 Brown Center Report, pp. 10-11.

[vi] If these influences have remained stable over time, they would not affect trends in NAEP. It is hard to believe, however, that high stakes tests carry the same importance today to high school students as they did in the past.

[vii] The 2004 blue ribbon commission report on the twelfth grade NAEP reported that by 2002 participation rates had fallen to 55 percent. That compares to 76 percent at eighth grade and 80 percent at fourth grade. Participation rates refer to the originally drawn sample, before replacements are made. NAEP is conducted with two stage sampling—schools first, then students within schools—meaning that the low participation rate is a product of both depressed school (82 percent) and student (77 percent) participation. See page 8 of: http://www.nagb.org/content/nagb/assets/documents/publications/12_gr_commission_rpt.pdf

[viii] Private school data are spotty on the LTT NAEP because of problems meeting reporting standards, but analyses identical to Fordham’s can be conducted on Catholic school students for the 2008 and 2012 cohorts of 17-year-olds.

[ix] The non-response rate in 2005 was 33 percent.

[x] The nine response categories are: American Indian or Alaska Native; Asian, Asian American, or Pacific Islander; Black or African American; Mexican or Mexican American; Puerto Rican; Other Hispanic, Latino, or Latin American; White; Other; and No Response.

Authors

or

Has Common Core influenced instruction?


The release of 2015 NAEP scores showed national achievement stalling out or falling in reading and mathematics.  The poor results triggered speculation about the effect of Common Core State Standards (CCSS), the controversial set of standards adopted by more than 40 states since 2010.  Critics of Common Core tended to blame the standards for the disappointing scores.  Its defenders said it was too early to assess CCSS’s impact and that implementation would take many years to unfold. William J. Bushaw, executive director of the National Assessment Governing Board, cited “curricular uncertainty” as the culprit.  Secretary of Education Arne Duncan argued that new standards typically experience an “implementation dip” in the early days of teachers actually trying to implement them in classrooms.

In the rush to argue whether CCSS has positively or negatively affected American education, these speculations are vague as to how the standards boosted or depressed learning.  They don’t provide a description of the mechanisms, the connective tissue, linking standards to learning.  Bushaw and Duncan come the closest, arguing that the newness of CCSS has created curriculum confusion, but the explanation falls flat for a couple of reasons.  If curricular uncertainty were depressing scores, the three states that adopted the standards, rescinded them, and then adopted something else should be the most confused of all.  Yet the 2013-2015 NAEP changes for Indiana, Oklahoma, and South Carolina were a little bit better than the national figures, not worse.[i]  In addition, surveys of math teachers conducted in the first year or two after the standards were adopted found that:  a) most teachers liked them, and b) most teachers said they were already teaching in a manner consistent with CCSS.[ii]  They didn’t mention uncertainty.  Recent polls, however, show those positive sentiments eroding. Mr. Bushaw might be mistaking disenchantment for uncertainty.[iii]

For teachers, the novelty of CCSS should be dissipating.  Common Core’s advocates placed great faith in professional development to implement the standards.  Well, there’s been a lot of it.  Over the past few years, millions of teacher-hours have been devoted to CCSS training.  Whether all that activity had a lasting impact is questionable.  Randomized control trials have been conducted of two large-scale professional development programs.  Interestingly, although they pre-date CCSS, both programs attempted to promote the kind of “instructional shifts” championed by CCSS advocates. The studies found that if teacher behaviors change from such training—and that’s not a certainty—the changes fade after a year or two.  Indeed, that’s a pattern evident in many studies of educational change: a pop at the beginning, followed by fade out.  

My own work analyzing NAEP scores in 2011 and 2013 led me to conclude that the early implementation of CCSS was producing small, positive changes in NAEP.[iv]  I warned that those gains “may be as good as it gets” for CCSS.[v]  Advocates of the standards hope that CCSS will eventually produce long term positive effects as educators learn how to use them.  That’s a reasonable hypothesis.  But it should now be apparent that a counter-hypothesis has equal standing: any positive effect of adopting Common Core may have already occurred.  To be precise, the proposition is this: any effects from adopting new standards and attempting to change curriculum and instruction to conform to those standards occur early and are small in magnitude.   Policymakers still have a couple of arrows left in the implementation quiver, accountability being the most powerful.  Accountability systems have essentially been put on hold as NCLB sputtered to an end and new CCSS tests appeared on the scene.  So the CCSS story isn’t over.  Both hypotheses remain plausible. 

Reading Instruction in 4th and 8th Grades

Back to the mechanisms, the connective tissue binding standards to classrooms.  The 2015 Brown Center Report introduced one possible classroom effect that is showing up in NAEP data: the relative emphasis teachers place on fiction and nonfiction in reading instruction.  The ink was still drying on new Common Core textbooks when a heated debate broke out about CCSS’s recommendation that informational reading should receive greater attention in classrooms.[vi] 

Fiction has long dominated reading instruction.  That dominance appears to be waning.



After 2011, something seems to have happened.  I am more persuaded that Common Core influenced the recent shift towards nonfiction than I am that Common Core has significantly affected student achievement—for either good or ill.   But causality is difficult to confirm or to reject with NAEP data, and trustworthy efforts to do so require a more sophisticated analysis than presented here.

Four lessons from previous education reforms

Nevertheless, the figures above reinforce important lessons that have been learned from previous top-down reforms.  Let’s conclude with four:

1.  There seems to be evidence that CCSS is having an impact on the content of reading instruction, moving from the dominance of fiction over nonfiction to near parity in emphasis.  Unfortunately, as Mark Bauerlein and Sandra Stotsky have pointed out, there is scant evidence that such a shift improves children’s reading.[vii]

2.  Reading more nonfiction does not necessarily mean that students will be reading higher quality texts, even if the materials are aligned with CCSS.   The Core Knowledge Foundation and the Partnership for 21st Century Learning, both supporters of Common Core, have very different ideas on the texts schools should use with the CCSS.[viii] The two organizations advocate for curricula having almost nothing in common.

3.  When it comes to the study of implementing education reforms, analysts tend to focus on the formal channels of implementation and the standard tools of public administration—for example, intergovernmental hand-offs (federal to state to district to school), alignment of curriculum, assessment and other components of the reform, professional development, getting incentives right, and accountability mechanisms.  Analysts often ignore informal channels, and some of those avenues funnel directly into schools and classrooms.[ix]  Politics and the media are often overlooked.  Principals and teachers are aware of the politics swirling around K-12 school reform.  Many educators undoubtedly formed their own opinions on CCSS and the fiction vs. nonfiction debate before the standard managerial efforts touched them.

4.  Local educators whose jobs are related to curriculum almost certainly have ideas about what constitutes good curriculum.  It’s part of the profession.  Major top-down reforms such as CCSS provide local proponents with political cover to pursue curricular and instructional changes that may be politically unpopular in the local jurisdiction.  Anyone who believes nonfiction should have a more prominent role in the K-12 curriculum was handed a lever for promoting his or her beliefs by CCSS. I’ve previously called these the “dog whistles” of top-down curriculum reform, subtle signals that give local advocates license to promote unpopular positions on controversial issues.


[i] In the four subject-grade combinations assessed by NAEP (reading and math at 4th and 8th grades), IN, SC, and OK all exceeded national gains on at least three out of four tests from 2013-2015.  NAEP data can be analyzed using the NAEP Data Explorer: http://nces.ed.gov/nationsreportcard/naepdata/.

[ii] In a Michigan State survey of teachers conducted in 2011, 77 percent of teachers, after being presented with selected CCSS standards for their grade, thought they were the same as their state’s former standards.  http://education.msu.edu/epc/publications/documents/WP33ImplementingtheCommonCoreStandardsforMathematicsWhatWeknowaboutTeacherofMathematicsin41S.pdf

[iii] In the Education Next surveys, 76 percent of teachers supported Common Core in 2013 and 12 percent opposed.  In 2015, 40 percent supported and 50 percent opposed. http://educationnext.org/2015-ednext-poll-school-reform-opt-out-common-core-unions.

[iv] I used variation in state implementation of CCSS to assign the states to three groups and analyzed differences in the groups’ NAEP gains.

[v] http://www.brookings.edu/~/media/research/files/reports/2015/03/bcr/2015-brown-center-report_final.pdf

[vi] http://www.edweek.org/ew/articles/2012/11/14/12cc-nonfiction.h32.html?qs=common+core+fiction

[vii] Mark Bauerlein and Sandra Stotsky (2012). “How Common Core’s ELA Standards Place College Readiness at Risk.” A Pioneer Institute White Paper.

[viii] Compare the P21 Common Core Toolkit (http://www.p21.org/our-work/resources/for-educators/1005-p21-common-core-toolkit) with Core Knowledge ELA Sequence (http://www.coreknowledge.org/ccss).  It is hard to believe that they are talking about the same standards in references to CCSS.

[ix] I elaborate on this point in Chapter 8, “The Fate of Reform,” in The Tracking Wars: State Reform Meets School Policy (Brookings Institution Press, 1999).


Authors

Image Source: © Patrick Fallon / Reuters
or

2016 Brown Center Report on American Education: How Well Are American Students Learning?


or

Reading and math in the Common Core era


or

Brookings Live: Reading and math in the Common Core era


Event Information

March 28, 2016
4:00 PM - 4:30 PM EDT

Online Only
Live Webcast

And more from the Brown Center Report on American Education


The Common Core State Standards have been adopted as the reading and math standards in more than forty states, but are the frontline implementers—teachers and principals—enacting them? As part of the 2016 Brown Center Report on American Education, Tom Loveless examines the degree to which CCSS recommendations have penetrated schools and classrooms. He specifically looks at the impact the standards have had on the emphasis of non-fiction vs. fiction texts in reading, and on enrollment in advanced courses in mathematics.

On March 28, the Brown Center hosted an online discussion of Loveless's findings, moderated by the Urban Institute's Matthew Chingos.  In addition to the Common Core, Loveless and Chingos also discussed the other sections of the three-part Brown Center Report, including a study of the relationship between ability group tracking in eighth grade and AP performance in high school.

Watch the archived video below.


or

Common Core’s major political challenges for the remainder of 2016


The 2016 Brown Center Report (BCR), which was published last week, presented a study of Common Core State Standards (CCSS).   In this post, I’d like to elaborate on a topic touched upon but deserving further attention: what to expect in Common Core’s immediate political future. I discuss four key challenges that CCSS will face between now and the end of the year.

Let’s set the stage for the discussion.  The BCR study produced two major findings.  First, several changes that CCSS promotes in curriculum and instruction appear to be taking place at the school level.  Second, states that adopted CCSS and have been implementing the standards have registered about the same gains and losses on NAEP as states that either adopted and rescinded CCSS or never adopted CCSS in the first place.  These are merely associations and cannot be interpreted as saying anything about CCSS’s causal impact.  Politically, that doesn’t really matter. The big story is that NAEP scores have been flat for six years, an unprecedented stagnation in national achievement that states have experienced regardless of their stance on CCSS.  Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores.  No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic.

"Yes, it’s unfair, but CCSS is paying a political price for those disappointing NAEP scores. No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic."

TIMSS and PISA scores in November-December

NAEP has two separate test programs.  The scores released in 2015 were for the main NAEP, which began in 1990.  The long term trend (LTT) NAEP, a different test that was first given in 1969, has not been administered since 2012.  It was scheduled to be given in 2016, but was cancelled due to budgetary constraints.  It was next scheduled for 2020, but last fall officials cancelled that round of testing as well, meaning that the LTT NAEP won’t be given again until 2024.  

With the LTT NAEP on hold, only two international assessments will soon offer estimates of U.S. achievement that, like the two NAEP tests, are based on scientific sampling:  PISA and TIMSS.  Both tests were administered in 2015, and the new scores will be released around the Thanksgiving-Christmas period of 2016.  If PISA and TIMSS confirm the stagnant trend in U.S. achievement, expect CCSS to take another political hit.  America’s performance on international tests engenders a lot of hand wringing anyway, so the reaction to disappointing PISA or TIMSS scores may be even more pronounced than what the disappointing NAEP scores generated.

Is teacher support still declining?

Watch Education Next’s survey on Common Core (usually released in August/September) and pay close attention to teacher support for CCSS.  The trend line has been heading steadily south. In 2013, 76 percent of teachers said they supported CCSS and only 12 percent were opposed.  In 2014, teacher support fell to 43 percent and opposition grew to 37 percent.  In 2015, opponents outnumbered supporters for the first time, 50 percent to 37 percent.  Further erosion of teacher support will indicate that Common Core’s implementation is in trouble at the ground level.  Don’t forget: teachers are the final implementers of standards.

An effort by Common Core supporters to change NAEP

The 2015 NAEP math scores were disappointing.  Watch for an attempt by Common Core supporters to change the NAEP math tests. Michael Cohen, President of Achieve, a prominent pro-CCSS organization, released a statement about the 2015 NAEP scores that included the following: "The National Assessment Governing Board, which oversees NAEP, should carefully review its frameworks and assessments in order to ensure that NAEP is in step with the leadership of the states. It appears that there is a mismatch between NAEP and all states' math standards, no matter if they are common standards or not.” 

Reviewing and potentially revising the NAEP math framework is long overdue.  The last adoption was in 2004.  The argument for changing NAEP to place greater emphasis on number and operations, revisions that would bring NAEP into closer alignment with Common Core, also has merit.  I have a longstanding position on the NAEP math framework. In 2001, I urged the National Assessment Governing Board (NAGB) to reject the draft 2004 framework because it was weak on numbers and operations—and especially weak on assessing student proficiency with whole numbers, fractions, decimals, and percentages.  

Common Core’s math standards are right in line with my 2001 complaint.  Despite my sympathy for Common Core advocates’ position, a change in NAEP should not be made because of Common Core.  In that 2001 testimony, I urged NAGB to end the marriage of NAEP with the 1989 standards of the National Council of Teachers of Mathematics, the math reform document that had guided the main NAEP since its inception.  Reform movements come and go, I argued.  NAGB’s job is to keep NAEP rigorously neutral.  The assessment’s integrity depends upon it.  NAEP was originally intended to function as a measuring stick, not as a PR device for one reform or another.  Any change to NAEP must be made carefully and must be rooted in the mathematics children need to learn.  If powerful groups in Washington, DC appear to be changing “The Nation’s Report Card” so that Common Core looks better, the political fallout will hurt both Common Core and NAEP.

Will Opt Out grow?

Watch the Opt Out movement.  In 2015, several organized groups of parents refused to allow their children to take Common Core tests.  In New York state alone, about 60,000 opted out in 2014, skyrocketing to 200,000 in 2015.  Common Core testing for 2016 begins now and goes through May.  It will be important to see whether Opt Out can expand to other states, grow in numbers, and branch out beyond middle- and upper-income neighborhoods.

Conclusion

Common Core is now several years into implementation.  Supporters have had a difficult time persuading skeptics that any positive results have occurred. The best evidence has been mixed on that question.  CCSS advocates say it is too early to tell, and we’ll just have to wait to see the benefits.  That defense won’t work much longer.  Time is running out.  The political challenges that Common Core faces the remainder of this year may determine whether it survives.

Authors

Image Source: Jim Young / Reuters
or

Three cheers for logrolling: The demise of the Sustainable Growth Rate (SGR)


Editor's note: This post originally appeared in the New England Journal of Medicine's Perspective online series on April 22, 2015.

Congress has finally euthanized the sustainable growth rate formula (SGR). Enacted in 1997 to hold down the growth of Medicare spending on physician services, the formula initially worked more or less as intended. Then it began to call for progressively larger and more unrealistic fee cuts — nearly 30% in some years, 21% in 2015. Aware that such cuts would be devastating, Congress repeatedly postponed them, and most observers understood that such cuts would never be implemented. Still, many physicians fretted that the unthinkable might happen.

Now Congress has scrapped the SGR, replacing it with still-embryonic but promising incentives that could catalyze increased efficiency and greater cost control than the old, flawed formula could ever really have done, in a law that includes many other important provisions. How did such a radical change occur?  And why now?

The “how” was logrolling — the trading of votes by legislators in order to pass legislation of interest to each of them. Logrolling has become a dirty word, a much-reviled political practice. But the Medicare Access and CHIP (Children’s Health Insurance Program) Reauthorization Act (MACRA), negotiated by House leaders John Boehner (R-OH) and Nancy Pelosi (D-CA) and their staffs, is a reminder that old-time political horse trading has much to be said for it.

The answer to “why now?” can be found in the technicalities of budget scoring. Under the SGR, Medicare’s physician fees were tied through a complex formula to a target based on caseloads, practice costs, and the gross domestic product. When current spending on physician services exceeded the targets, the formula called for fee cuts to be applied prospectively. Fee cuts that were not implemented were carried forward and added to any future cuts the formula might generate. Because Congress repeatedly deferred cuts, a backlog developed. By 2012, this backlog combined with assumed rapid future growth in Medicare spending caused the Congressional Budget Office (CBO) to estimate the 10-year cost of repealing the SGR at a stunning $316 billion.
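
The carryforward dynamic is easier to see in a stylized sketch. The 2 percent annual cut below is an invented number purely for illustration; the real formula tied its targets to caseloads, practice costs, and the gross domestic product.

    # Stylized SGR backlog: each year the formula demands a cut; when Congress
    # defers it, the deferred amount is added to the next year's required cut.
    required_cut = 0.0
    for year in range(2006, 2016):
        annual_formula_cut = 0.02          # suppose the formula calls for 2% a year
        required_cut += annual_formula_cut # deferrals accumulate into a backlog
        print(f"{year}: cumulative cut demanded = {required_cut:.0%}")
    # A decade of deferrals yields a 20% "required" cut in this toy example,
    # roughly the scale of the 21% cut the formula called for in 2015.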

For many years, Congress looked the costs of repealing the SGR squarely in the eye — and blinked. The cost of a 1-year delay, as estimated by the CBO, was a tiny fraction of the cost of repeal. So Congress delayed — which is hardly surprising.

But then, something genuinely surprising did happen. The growth of overall health care spending slowed, causing the CBO to slash its estimates of the long-term cost of repealing the SGR. By 2015, the 10-year price of repeal had fallen to $136 billion. Even this number was a figment of budget accounting, since the chance that the fee cuts would ever have been imposed was minuscule. But the smaller number made possible the all-too-rare bipartisan collaboration that produced the legislation that President Barack Obama has just signed.

The core of the law is repeal of the SGR and abandonment of the 21% cut in Medicare physician fees it called for this year. In its place is a new method of paying physicians under Medicare. Some elements are specified in law; some are to be introduced later. The hard-wired elements include annual physician fee updates of 0.5% per year through 2019 and 0% from 2020 through 2025, along with a “merit-based incentive payment system” (MIPS) that will replace current incentive programs that terminate in 2018. The new program will assess performance in four categories: quality of care, resource use, meaningful use of electronic health records, and clinical practice improvement activities. Bonuses and penalties, ranging from +12% to –4% in 2020, and increasing to +27% to –9% for 2022 and later, will be triggered by performance scores in these four areas. The exact content of the MIPS will be specified in rules that the secretary of health and human services is to develop after consultation with physicians and other health care providers.
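
Because the statute specifies the adjustment ranges but leaves the scoring rules to later rulemaking, any numerical example is necessarily hypothetical. The sketch below simply shows how a composite performance score could map linearly onto the statutory ranges quoted above; it is not the actual MIPS methodology.

    # Hypothetical illustration only: MACRA sets the adjustment ranges but
    # leaves the scoring rules to rulemaking by the HHS secretary.
    def mips_adjustment(composite_score: float, year: int) -> float:
        # Statutory ranges: +12% to -4% in 2020, widening to +27% to -9% for 2022+.
        low, high = (-0.09, 0.27) if year >= 2022 else (-0.04, 0.12)
        return low + composite_score * (high - low)

    print(f"{mips_adjustment(0.0, 2020):+.1%}")   # -4.0% penalty at the bottom
    print(f"{mips_adjustment(1.0, 2022):+.1%}")   # +27.0% bonus at the top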

Higher fees will be available to professionals who work in “alternative payment organizations” that typically will move away from fee-for-service payment, cover multiple services, show that they can limit the growth of spending, and use performance-based methods of compensation. These and other provisions will ramp up pressure on physicians and other providers to move from traditional individual or small-group fee-for-service practices into risk-based multi-specialty settings that are subject to management and oversight more intense than that to which most practitioners are yet accustomed.

Both parties wanted to bury the SGR. But MACRA contains other provisions, unrelated to the SGR, that appeal to discrete segments of each party. Democrats had been seeking a 4-year extension of CHIP, which serves 8 million children and pregnant women. They were running into stiff headwinds from conservatives who wanted to scale back the program. MACRA extends CHIP with no cuts but does so for only 2 years.  It also includes a number of other provisions sought by Democrats: a 2-year extension of the Maternal, Infant, and Early Childhood Home Visiting program, plus permanent extensions of the Qualified Individual program, which pays Part B Medicare premiums for people with incomes just over the federal poverty thresholds, and transitional medical assistance, which preserves Medicaid eligibility for up to 1 year after a beneficiary gets a job.

The law also facilitates access to health benefits. MACRA extends for two years states’ authority to enroll applicants for health benefits on the basis of data on income, household size, and other factors gathered when people enroll in other programs such as the Supplemental Nutrition Assistance Program, the National School Lunch Program, Temporary Assistance to Needy Families (“welfare”), or Head Start. It also provides $7.2 billion over the next two years to support community health centers, extending funding established in the Affordable Care Act.

Elements of each party, concerned about budget deficits, wanted provisions to pay for the increased spending. They got some of what they wanted, but not enough to prevent some conservative Republicans in both the Senate and the House from opposing final passage. Many conservatives have long sought to increase the proportion of Medicare Part B costs that are covered by premiums. Most Medicare beneficiaries pay Part B premiums covering 25% of the program’s actuarial value. Relatively high-income beneficiaries pay premiums that cover 35, 50, 65, or 80% of that value, depending on their income. Starting in 2018, MACRA will raise the 50% and 65% premiums to 65% and 80%, respectively, affecting about 2% of Medicare beneficiaries. No single person with an income (in 2015 dollars) below $133,501 or couple with income below $267,001 would be affected initially. MACRA freezes these thresholds through 2019, after which they are indexed for inflation. Under previous law, the thresholds were to have been greatly increased in 2019, reducing the number of high-income Medicare beneficiaries to whom these higher premiums would have applied. (For reference, half of all Medicare beneficiaries currently have incomes below $26,000 a year.)
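
A minimal sketch of the bracket change described above (dollar premiums depend on Part B’s actuarial value, which is not reproduced here):

    # Starting in 2018, MACRA raises the 50% and 65% premium shares
    # to 65% and 80%; the 25%, 35%, and 80% shares are unchanged.
    def post_macra_share(pre_macra_share: float) -> float:
        increases = {0.50: 0.65, 0.65: 0.80}
        return increases.get(pre_macra_share, pre_macra_share)

    for share in (0.25, 0.35, 0.50, 0.65, 0.80):
        print(f"{share:.0%} -> {post_macra_share(share):.0%}")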

A second provision bars Medigap plans from covering the Part B deductible, which is now $147. By exposing more people to deductibles, this provision will cause some reduction in Part B spending. Everyone who buys such plans will see reduced premiums; some will face increased out-of-pocket costs. The financial effects either way will be small.

Inflexible adherence to principle contributes to the political gridlock that has plunged rates of public approval of Congress to subfreezing lows. MACRA is a reminder of the virtues of compromise and quiet negotiation. A small group of congressional leaders and their staffs crafted a law that gives something to most members of both parties. Today’s appalling norm of poisonously polarized politics makes this instance of political horse trading seem nothing short of miraculous.

Authors

Publication: NEJM
or

Strengthening Medicare for 2030 - A working paper series


The addition of Medicare in 1965 completed a suite of federal programs designed to protect the wealth and health of people reaching older ages in the United States, a suite that began with the 1934 Committee on Economic Security, whose work produced what we know today as Social Security. While few would deny Medicare’s important role in improving older and disabled Americans’ financial security and health, many worry about sustaining and strengthening Medicare to finance high-quality, affordable health care for coming generations.

In 1965, average life expectancy for a 65-year-old man and woman was another 13 years and 16 years, respectively. Now, life expectancy for 65-year-olds is 18 years for men and 20 years for women—effectively a four- to five-year increase.

In 2011, the first of 75-million-plus baby boomers became eligible for Medicare. And by 2029, when all of the baby boomers will be 65 or older, the U.S. Census Bureau predicts 20 percent of the U.S. population will be older than 65. Just by virtue of the sheer size of the aging population, Medicare spending growth will accelerate sharply in the coming years.


Figure: Estimated Medicare Spending, 2010-2030. Sources: Future Elderly Model (FEM), University of Southern California Leonard D. Schaeffer Center for Health Policy & Economics, U.S. Census Bureau projections, Medicare Current Beneficiary Survey and Centers for Medicare & Medicaid Services.

The Center for Health Policy at Brookings and the USC Leonard D. Schaeffer Center for Health Policy and Economics held a half-day forum on the future of Medicare that looked ahead to the year 2030, when the youngest baby boomers will be Medicare-eligible, to explore the changing demographics, health care needs, medical technology costs, and financial resources that will be available to beneficiaries. The working papers below address five critical components of Medicare reform, including modernizing Medicare's infrastructure, benefit design, marketplace competition, and payment mechanisms.

DISCUSSION PAPERS

  • Health and Health Care of Beneficiaries in 2030, Étienne Gaudette, Bryan Tysinger, Alwyn Cassil and Dana Goldman: This chartbook, prepared by the USC Schaeffer Center, aims to help policymakers understand how Medicare spending and beneficiary demographics will likely change over the next 15 years to help strengthen and sustain the program.
  • Trends in the Well-Being of Aged and their Prospects through 2030, Gary Burtless: This paper offers a survey of trends in old-age poverty, income, inequality, labor market activity, insurance coverage, and health status, and provides a brief discussion of whether the favorable trends of the past half century can continue in the next few decades.
  • The Transformation of Medicare, 2015 to 2030, Henry J. Aaron and Robert Reischauer: This paper discusses how Medicare can be made a better program and how it should look in 2030s using the perspectives of beneficiaries, policymakers and administrators; and that of society at large.
  • Improving Provider Payment in Medicare, Paul Ginsburg and Gail Wilensky: This paper discusses the various alternative payment models currently being implemented in the private sector and elsewhere that can be employed in the Medicare program to preserve quality of care and also reduce costs.

Authors

Publication: The Brookings Institution and the USC Schaeffer Center
or

Strengthening Medicare for 2030


Event Information

June 5, 2015
9:00 AM - 1:00 PM EDT

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue, N.W.
Washington, DC 20036


In its 50th year, the Medicare program currently provides health insurance coverage for more than 49 million Americans and accounts for $600 billion in federal spending. With those numbers expected to rise as the baby boomer generation ages, many policy experts consider this impending expansion a major threat to the nation’s economic future and question how it might affect the quality and value of health care for Medicare beneficiaries.

On June 5, the Center for Health Policy at Brookings and the USC Leonard D. Schaeffer Center for Health Policy and Economics hosted a half-day forum on the future of Medicare. Instead of reflecting on historical accomplishments, the event looked ahead to 2030—a time when the youngest Baby Boomers will be Medicare-eligible—and explored the changing demographics, health care needs, medical technology costs, and financial resources available to beneficiaries. The panels focused on modernizing Medicare's infrastructure, benefit design, marketplace competition, and payment mechanisms. The event also included the release of five policy papers from featured panelists.

Please note that presentation slides from USC's Dana Goldman will not be available for download. For more information on findings from his presentation, download the working paper available on this page or watch the event video.

or

King v. Burwell: Chalk one up for common sense


The Supreme Court today decided that Congress meant what it said when it enacted the Affordable Care Act (ACA). The ACA requires people in all 50 states to carry health insurance and provides tax credits to help them afford it. To have offered such credits only in the dozen states that set up their own exchanges would have been cruel and unsustainable because premiums for many people would have been unaffordable.

But the law said that such credits could be paid in exchanges ‘established by a state,’ which led some to claim that the credits could not be paid to people enrolled by the federally operated exchange. In his opinion, Chief Justice Roberts euphemistically calls that wording ‘inartful.’ Six Supreme Court justices decided that, read in its entirety, the law provides tax credits in every state, whether the state manages the exchange itself or lets the federal government do it for them.

That decision is unsurprising. More surprising is that the Court agreed to hear the case. When it did so, cases on the same issue were making their way through four federal circuits. In only one of the four circuits was there a standing decision, and it found that tax credits were available everywhere. It is customary for the Supreme Court to wait to take a case until action in lower courts is complete or two circuits have disagreed. In this situation, the justices, eyeing the electoral calendar, may have preferred to hear the case sooner rather than later to avoid confronting it in the middle of a presidential election.

Whatever the Court’s motives for taking the case, their willingness to hear it caused supporters of the Affordable Care Act enormous unease. Were the more conservative members of the Court poised to accept an interpretation of the law that ACA supporters found ridiculous, but to which inartful legislative drafting gave the gloss of plausibility? Judicial demeanor at oral argument was not comforting. A 5-4 decision disallowing payment of tax credits seemed ominously plausible.

Future Challenges for the ACA

The Court’s 6-3 decision ended those fears. The existential threat to health reform from litigation is over. But efforts to undo the Affordable Care Act are not at an end. They will continue in the political sphere. And that is where they should be. ACA opponents know that there is little chance for them to roll back the Affordable Care Act in any fundamental way as long as a Democrat is in the White House. To dismantle the law, they must win the presidency in 2016.

But winning the presidency will not be enough. It would be mid-2017 before ACA opponents could draft and enact legislation to curb the Affordable Care Act and months more before it could take effect. To borrow a metaphor from the military, even if those opposed to the ACA win the presidency, they will have to deal with ‘facts on the ground.’

Well over 30 million Americans will be receiving health insurance under the Affordable Care Act. That will include people who can afford health insurance because of the tax credits the Supreme Court affirmed today. It will include millions more insured through Medicaid in the steadily growing number of states that have agreed to extend Medicaid coverage. It will include the young adult children covered under parental plans because the ACA requires this option.

Insurance companies will have millions more customers because of the ACA. Hospitals will fill more beds because previously uninsured people will be able to afford care, and they will carry fewer unpaid bills from patients whom, under previous law, they had to admit but who could not pay. Drug companies and device manufacturers will be enjoying increased sales because of the ACA.

The elderly will have better drug coverage because the ACA has eliminated the notorious ‘donut hole’—the drug expenditures that Medicare previously did not cover.

Those facts will discourage any frontal assault on the ACA, particularly if the rate of increase of health spending remains as well controlled as it has been for the past seven years.

Of course, differences between supporters and opponents of the ACA will not vanish. But those differences will not preclude constructive legislation. Beginning in 2017, the ACA gives states an opening to propose, alone or in groups, alternative ways of achieving the goals of the Affordable Care Act. The law authorizes the president to approve such waivers if they serve the goals of the law. The United States is large and diverse. Use of this authority may help defuse the bitter acrimony surrounding Obamacare, as my colleague, Stuart Butler, has suggested. At the same time, Obamacare supporters have their own list of changes that they believe would improve the law. At the top of the list is fixing the ‘family glitch,’ a drafting error that unintentionally deprives many families of access to the insurance exchanges and to tax credits that would make insurance affordable.

As Chief Justice Roberts wrote near the end of his opinion of the Court, “In a democracy, the power to make the law rests with those chosen by the people....Congress passed the Affordable Care Act to improve health insurance markets, not to destroy them.” The Supreme Court decision assuring that tax credits are available in all states spares the nation chaos and turmoil. It returns the debate about health care policy to the political arena where it belongs. In so doing, it brings a bit closer the time when the two parties may find it in their interest to sit down and deal with the twin realities of the Affordable Care Act: it is imperfect legislation that needs fixing, and it is decidedly here to stay.

Authors

Image Source: © Jim Tanner / Reuters
or

2016: The most important election since 1932


The 2016 presidential election confronts the U.S. electorate with political choices more fundamental than any since 1964 and possibly since 1932. That statement may strike some as hyperbolic, but the policy differences between the two major parties and the positions of candidates vying for their presidential nominations support this claim.

A victorious Republican candidate would take office backed by a Republican-controlled Congress, possibly with heightened majorities and with the means to deliver on campaign promises. On the other hand, the coattails of a successful Democratic candidate might bring more Democrats to Congress, but that president would almost certainly have to work with a Republican House and, quite possibly, a still Republican Senate. The political wars would continue, but even a president engaged in continuous political trench warfare has the power to get a lot done.

Candidates always promise more than they can deliver and often deliver different policies from those they have promised. Every recent president has been buffeted by external events unanticipated when he took office. But this year, more than in half a century or more, the two parties offer a choice, not an echo. Here is a partial and selective list of key issues to illustrate what is at stake.

Health care 

The Affordable Care Act, known as Obamacare or the ACA, passed both houses of Congress with not a single Republican vote. The five years since enactment of the ACA have not dampened Republican opposition.

The persistence and strength of opposition to the ACA is quite unlike post-enactment reactions to the Social Security Act of 1935 or the 1965 amendments that created Medicare. Both earlier programs were hotly debated and controversial. But a majority of both parties voted for the Social Security Act. A majority of House Republicans and a sizeable minority of Senate Republicans supported Medicare. In both cases, opponents not only became reconciled to the new laws but eventually participated in improving and extending them. Republican members of Congress overwhelmingly supported, and a Republican president endorsed, adding Disability Insurance to the Social Security Act.  In 2003, a Republican president proposed and fought for the addition of a drug benefit to Medicare.

The current situation bears no resemblance to those two situations. Five years after enactment of Obamacare, in contrast, every major candidate for the Republican presidential nomination has called for its repeal and replacement. So have the Republican Speaker of the House of Representatives and Majority Leader in the Senate.  

Just what 'repeal and replace' might look like under a GOP president remains unclear as ACA critics have not agreed on an alternative. Some plans would do away with some of the elements of Obamacare and scale back others. Some proposals would repeal the mandate that people carry insurance, the bar on 'medical underwriting' (a once-routine practice under which insurers vary premiums based on expected use of medical care), or the requirement that insurers sell plans to all potential customers. Other proposals would retain tax credits to help make insurance affordable but reduce their size, or would end rules specifying what 'adequate' insurance plans must cover.

Repeal is hard to imagine if a Democrat wins the presidency in 2016. Even if repeal legislation could overcome a Senate filibuster, a Democratic president would likely veto it and an override would be improbable. 

But a compromise with horse-trading, once routine, might once again become possible. A Democratic president might agree to Republican-sponsored changes to the ACA, such as dropping the requirement that employers of 50 or more workers offer insurance to their employees, if Republicans agreed to changes in the ACA that supporters seek, such as the extension of tax credits to families now barred from them because one member has access to very costly employer-sponsored insurance.

In sum, the 2016 election will determine the future of the most far-reaching social insurance legislation in half a century.

Social Security

Social Security faces a projected long-term gap between what it takes in and what it is scheduled to pay out. Every major Republican candidate has called for cutting benefits below those promised under current law. None has suggested any increase in payroll tax rates. Each Democratic candidate has proposed raising both revenues and benefits. Within those broad outlines, the specific proposals differ.

Most Republican candidates would cut benefits across the board or selectively for high earners. For example, Senator Ted Cruz proposes to link benefits to prices rather than wages, a switch that would reduce Social Security benefits relative to current law by steadily larger amounts: an estimated 29 percent by 2065 and 46 percent by 2090. He would allow younger workers to shift payroll taxes to private accounts. Donald Trump has proposed no cuts in Social Security because, he says, proposing cuts is inconsistent with winning elections and because meeting current statutory commitments is 'honoring a deal.' Trump also favors letting people invest part of their payroll taxes in private securities. He has not explained how he would make up the funding gap that would result if current benefits are honored but revenues to support them are reduced. Senator Marco Rubio has endorsed general benefit cuts, but he has also proposed to increase the minimum benefit. Three Republican candidates have proposed ending payroll taxes for older workers, a step that would add to the projected funding gap.
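
A back-of-the-envelope illustration of why the indexing switch compounds over time: assume, purely for illustration, that wages outpace prices by about 0.7 percentage points a year. Price-indexed benefits then fall steadily behind the wage-indexed benefits promised under current law.

    # Relative benefit cut from indexing to prices instead of wages,
    # assuming wages grow 0.7 percentage points faster than prices.
    wage_premium = 0.007
    for horizon in (50, 75):    # years from 2015 to roughly 2065 and 2090
        relative_cut = 1 - 1 / (1 + wage_premium) ** horizon
        print(f"After {horizon} years: about {relative_cut:.0%} below current law")
    # Prints roughly 29% and 41%, the same order of magnitude as the cited
    # estimates (29% by 2065, 46% by 2090), which rest on more detailed
    # actuarial assumptions than this sketch.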

Democratic candidates, in contrast, would raise benefits, across-the-board or for selected groups—care givers or survivors. They would switch the price index used to adjust benefits for inflation to one that is tailored to consumption of the elderly and that analysts believe would raise benefits more rapidly than the index now in use. All would raise the ceiling on earnings subject to the payroll tax. Two would broaden the payroll tax base.

As these examples indicate, the two parties have quite different visions for Social Security. Major changes, such as those envisioned by some Republican candidates, are not easily realized, however. Before he became president, Ronald Reagan in numerous speeches called for restructuring Social Security. Those statements did not stop him from signing a 1983 law that restored financial balance to the very program against which he had inveighed but with few structural changes. George W. Bush sought to partially privatize Social Security, to no avail. Now, however, Social Security faces a funding gap that must eventually be filled. The discipline of Trust Fund financing means that tax increases, benefit cuts, or some combination of the two are inescapable. Action may be delayed beyond the next presidency, as current projections indicate that the Social Security Trust Fund and current revenues can sustain scheduled benefits until the mid 2030s. But that is not what the candidates propose. Voters face a choice, clear and stark, between a Democratic president who would try to maintain or raise benefits and would increase payroll taxes to pay for it, and a Republican president who would seek to cut benefits, oppose tax increases, and might well try to partially privatize Social Security.

The Environment

On no other issue is the split between the two parties wider or the stakes in their disagreement higher than on measures to deal with global warming. Leading Republican candidates have denied that global warming is occurring (Trump), scorned evidence supporting the existence of global warming as bogus (Cruz), acknowledged that global warming is occurring but not because of human actions (Rubio, Carson), or admitted that it is occurring but dismissed it as not a pressing issue (Fiorina, Christie). Congressional Republicans oppose current Administration initiatives under the Clean Air Act to curb emission of greenhouse gases.

Democratic candidates uniformly agree that global warming is occurring and that it results from human activities. They support measures to lower those emissions by amounts similar to those embraced in the Paris accords of December 2015 as essential to curb the speed and ultimate extent of global warming.

Climate scientists and economists are nearly unanimous that unabated emissions of greenhouse gases pose serious risks of devastating and destabilizing outcomes—that climbing average temperatures could render some parts of the world uninhabitable, that rising sea levels will inundate coastal regions inhabited by tens of millions of people, and that storms, droughts, and other climatic events will be more frequent and more destructive. Immediate actions to curb emission of greenhouse gases can reduce these effects. But no actions can entirely avoid them, and delay is costly.  Environmental economists also agree, with little partisan division, that the way to proceed is to harness market forces to reduce greenhouse gas emissions.

The division between the parties on global warming is not new. In 2009, the House of Representatives narrowly passed the American Clean Energy and Security Act. That law would have capped and gradually lowered greenhouse gas emissions. Two hundred eleven Democrats but only 8 Republicans voted for the bill. The Senate took no action, and the proposal died.

Now Republicans are opposing the Obama administration’s Clean Power Plan, a set of regulations under the Clean Air Act to lower emissions by power plants, which account for 40 percent of the carbon dioxide released into the atmosphere. The Clean Power Plan is a stop-gap measure. It applies only to power plants, not to other sources of emissions, and it is not nationally uniform. These shortcomings reflect the legislative authority on which the plan is based, the Clean Air Act. That law was designed to curb the local problem of air pollution, not the global damage from greenhouse gases. Environmental economists of both parties recognize that a tax or a cap on greenhouse gas emissions would be more effective and less costly than the current regulations, but superior alternatives are now politically unreachable.

Based on their statements, any of the current leading Republican candidates would back away from the recently negotiated Paris climate agreement, scuttle the Clean Power Plan, and resist any tax on greenhouse gas emissions. Any of the Democratic candidates would adhere to the Clean Power Plan and support the Paris climate agreement. One Democratic candidate has embraced a carbon tax. None has called for the extension of the Clean Power Plan to other emission sources, but such policies are consistent with their current statements.

The importance of global policy to curb greenhouse gas emissions is difficult to exaggerate. While the United States acting alone cannot entirely solve the problem, resolute action by the world’s largest economy and second largest greenhouse gas emitter is essential, in concert with other nations, to forestall climate catastrophe.

The Courts

If the next president serves two terms, as six of the last nine presidents have done, four of the currently sitting justices will be over age 86 and one over age 90 by the time that presidency ends—if, that is, they have not died or resigned before then.

The political views of the president have always shaped judicial appointments. Because federal judges enjoy lifetime tenure, these appointments influence events long after the president who made them has left office. The political importance of these appointments has always been enormous, but it is even greater now than in the past. One reason is that the jurisprudence of sitting Supreme Court justices now lines up more closely than in the past with that of the party of the president who appointed them. Republican presidents appointed all sitting justices identified as conservative; Democratic presidents appointed all sitting justices identified as liberal. The influence of the president’s politics extends to other judicial appointments as well.

A second reason is that recent judicial decisions have re-opened questions once regarded as settled. The decision in the first case dealing with the Affordable Care Act (ACA), NFIB v. Sebelius, is illustrative.

When the ACA was enacted, few observers doubted the power of the federal government to require people to carry health insurance. That power was based on a long line of decisions, dating back to the 1930s, under the Constitutional clause authorizing the federal government to regulate interstate commerce. In the 1930s, the Supreme Court rejected an older doctrine that had barred such regulations. The earlier doctrine dated from 1905, when the Court overturned a New York law that prohibited bakers from working more than 10 hours a day or 60 hours a week. The Court found in the 14th Amendment, which prohibits any state from ‘depriving any person of life, liberty or property, without due process of law,’ a right to contract, previously invisible to jurists, which it said the New York law violated. In the early and mid-1930s, the Court used this doctrine to invalidate some New Deal legislation. Then the Court changed course and authorized a vast range of regulations under the Constitution’s Commerce Clause. It was on this line of cases that supporters of the ACA relied.

Nor did many observers doubt the power of Congress to require states to broaden Medicaid coverage as a condition for remaining in the Medicaid program and receiving federal matching grants to help them pay for required medical services.

To the surprise of most legal scholars, a 5-4 Supreme Court majority ruled in NFIB v. Sebelius that the Commerce Clause did not authorize the individual health insurance mandate. But it decided, also 5 to 4, that tax penalties could be imposed on those who fail to carry insurance. The tax saved the mandate. But the decision also raised questions about federal powers under the Commerce Clause. The Court also ruled that the Constitution barred the federal government from requiring states to expand Medicaid coverage as a condition for remaining in the program. This decision was odd, in that Congress certainly could have achieved the same objective constitutionally by repealing the old Medicaid program and enacting a new one with the same rules as those contained in the ACA, which states would have been free to join or not.

NFIB v. Sebelius and other cases the Court has recently heard or soon will hear raise questions about what additional attempts to regulate interstate commerce might be ruled unconstitutional and about what limits the Court might impose on Congress’s power to require states to implement legislated rules as a condition of receiving federal financial aid. The Court has also heard, or soon will hear, a series of cases of fundamental importance regarding campaign financing, same-sex marriage, affirmative action, abortion rights, the death penalty, the delegation of powers to federal regulatory agencies, voting rights, and the rules under which people can seek redress in the courts for violation of their rights.

Throughout U.S. history, the American people have granted nine appointed judges the power to decide whether the actions taken by elected legislators are or are not consistent with a constitution written more than two centuries ago. As a practical matter, the Court could not maintain this sway if it deviated too far from public opinion. But the boundaries within which the Court has substantially unfettered discretion are wide, and within those limits the Supreme Court can profoundly limit or redirect the scope of legislative authority. The Supreme Court’s switch in the 1930s from doctrines under which much of the New Deal was found to be unconstitutional to other doctrines under which it was constitutional illustrates the Court’s sensitivity to public opinion and the profound influence of its decisions.

The bottom line is that the next president will likely appoint enough Supreme Court justices and other judges to shape the character of the Supreme Court and of lower courts with ramifications both broad and enduring on important aspects of every person’s life.

***

The next president will preside over critical decisions relating to health care policy, Social Security, and environmental policy, and will shape the character of the Supreme Court for the next generation. Profound differences distinguish the two major parties on these and many other issues. A recent survey of members of the House of Representatives found that on a scale of ‘liberal to conservative’ the most conservative Democrat was more liberal than the least conservative Republican. Whatever their source, these divisions are real.  The examples cited here are sufficient to show that the 2016 election richly merits the overworked term 'watershed'—it will be the most consequential presidential election in a very long time.

Authors





or

The impossible (pipe) dream—single-payer health reform


Led by presidential candidate Bernie Sanders, one-time supporters of ‘single-payer’ health reform are rekindling their romance with a health reform idea that was, is, and will remain a dream. Single-payer health reform is a dream because, as the old joke goes, ‘you can’t get there from here.’

Let’s be clear: opposing a proposal only because one believes it cannot be passed is usually a dodge. One should judge the merits. Strong leaders prove their skill by persuading people to embrace their visions. But single-payer is different. It is radical in a way that no legislation has ever been in the United States.

Not so, you may be thinking. Remember such transformative laws as the Social Security Act, Medicare, the Homestead Act, and the Interstate Highway Act. And, yes, remember the Affordable Care Act. Those and many other inspired legislative acts seemed revolutionary enough at the time. But none really was. None overturned entrenched and valued contractual and legislative arrangements. None reshuffled trillions—or in less inflated days, billions—of dollars devoted to the same general purpose as the new legislation. All either extended services previously available to only a few, or created wholly new arrangements.

To understand the difference between those past achievements and the idea of replacing current health insurance arrangements with a single-payer system, compare the Affordable Care Act with Sanders’ single-payer proposal.

Criticized by some for alleged radicalism, the ACA is actually stunningly incremental. Most of the ACA’s expanded coverage comes through extension of Medicaid, an existing public program that serves more than 60 million people. The rest comes through purchase of private insurance in “exchanges,” which embody the conservative ideal of a market that promotes competition among private vendors, or through regulations that extended the ability of adult offspring to remain covered under parental plans. The ACA minimally altered insurance coverage for the 170 million people covered through employment-based health insurance. The ACA added a few small benefits to Medicare but left it otherwise untouched. It left unaltered the tax breaks that support group insurance coverage for most working-age Americans and their families. It also left alone the military health programs serving 14 million people. Private nonprofit and for-profit hospitals, other vendors, and privately employed professionals continue to deliver most care.

In contrast, Senator Sanders’ plan, like the earlier proposal sponsored by Representative John Conyers (D-Michigan) which Sanders co-sponsored, would scrap all of those arrangements. Instead, people would simply go to the medical care provider of their choice and bills would be paid from a national trust fund. That sounds simple and attractive, but it raises vexatious questions.

  • How much would it cost the federal government? Where would the money to cover the costs come from?
  • What would happen to the $700 billion that employers now spend on health insurance?
  • Where would the $600 billion a year in reductions in total health spending that Sanders says his plan would generate come from?
  • What would happen to special facilities for veterans and families of members of the armed services?

Sanders has answers for some of these questions, but not for others. Both the answers and non-answers show why single payer is unlike past major social legislation.

The answer to the question of how much single payer would cost the federal government is simple: $4.1 trillion a year, or $1.4 trillion more than the federal government now spends on programs that the Sanders plan would replace. The money would come from new taxes. Half the added revenue would come from doubling the payroll tax that employers now pay for Social Security. This tax approximates what employers now collectively spend on health insurance for their employees...if they provide health insurance. But many don’t. Some employers would face large tax increases. Others would reap windfall gains.
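To make the windfall-versus-tax-increase point concrete, here is a minimal sketch in Python. It is a stylized calculation, not a costing of the actual plan: the 6.2-point increase mirrors the current employer Social Security tax rate, and the wage and premium figures are hypothetical.

```python
ADDED_TAX_RATE = 0.062  # doubling the current 6.2% employer Social Security tax

def change_in_labor_cost(wage: float, current_premium: float) -> float:
    """Per-worker change: new payroll tax minus premiums the employer stops paying."""
    return wage * ADDED_TAX_RATE - current_premium

# A hypothetical worker earning $50,000 a year:
print(change_in_labor_cost(50_000, 13_000))  # insuring employer: -9900.0, a windfall
print(change_in_labor_cost(50_000, 0))       # non-insuring employer: +3100.0 tax increase
```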

The cost question is particularly knotty, as Sanders assumes a 20 percent cut in spending averaged over ten years, even as roughly 30 million currently uninsured people would gain coverage. Those savings, even if actually realized, would start slowly, which means cuts of 30 percent or more by Year 10. Where would they come from? Savings from reduced red tape associated with individual insurance would cover a small fraction of this target. The major source would have to be fewer services or reduced prices. Who would determine which of the services that physicians regard as desirable, and that patients have come to expect, are no longer ‘needed’? How would such cuts be achieved without the massive bankruptcies among hospitals that, as columnist Ezra Klein has suggested, would follow them? What would be the reaction to the prospect of drastic cuts in the salaries of health care personnel – would we have a shortage of doctors and nurses? Would patients tolerate a reduction in services? If people thought that services under the Sanders plan were inadequate, would they be allowed to ‘top up’ with private insurance? If so, what happens to simplicity? If not, why not?
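The back-of-the-envelope arithmetic behind that 30-percent figure can be checked with a minimal sketch, assuming, purely for illustration, that the cuts phase in linearly from zero while averaging 20 percent over the decade:

```python
YEARS = range(1, 11)
# Solve for the linear ramp whose ten-year average cut is 20 percent:
# the average of (t * scale) over t = 1..10 must equal 0.20.
scale = 0.20 * len(YEARS) / sum(YEARS)
cuts = [t * scale for t in YEARS]
print(f"year 1: {cuts[0]:.1%}  year 10: {cuts[-1]:.1%}")
# -> year 1: 3.6%  year 10: 36.4%
```

Under that assumption, the Year 10 cut lands at about 36 percent, consistent with the ‘cuts of 30 percent or more’ cited above.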

Let me be clear: we know that high quality health care can be delivered at much lower cost than is the U.S. norm. We know because other countries do it. In fact, some of them have plans not unlike the one Senator Sanders is proposing. We know that single-payer mechanisms work in some countries. But those systems evolved over decades, based on gradual and incremental change from what existed before. That is the way that public policy is made in democracies. Radical change may occur after a catastrophic economic collapse or a major war. But in normal times, democracies do not tolerate radical discontinuity. If you doubt me, consider the tumult precipitated by the really quite conservative Affordable Care Act.


Editor's note: This piece originally appeared in Newsweek.

Authors

Publication: Newsweek
Image Source: © Jim Young / Reuters




or

The stunning ignorance of Trump's health care plan


One cannot help feeling a bit silly taking seriously the policy proposals of a person who seems not to take policy seriously himself. Donald Trump's policy positions have evolved faster over the years than a teenager's moods. He was for a woman's right to choose; now he is against it. He was for a wealth tax to pay off the national debt before proposing a tax plan that would enrich the wealthy and balloon the national debt. He was for universal health care but opposed to any practical way to achieve it.

Based on his previous flexibility, Trump's here-today proposals may well be gone tomorrow. As a sometime-Democrat, sometime-Republican, sometime-independent, who is now the leading candidate for the Republican presidential nomination, Trump has just issued his latest pronouncements on health care policy. So, what the hell, let's give them more respect than he has given his own past policy statements.

Perhaps unsurprisingly, those pronouncements are notable for their detachment from fact and lack of internal logic. The one-time supporter of universal health care now joins other candidates in his newly-embraced party in calling for repeal of the only serious legislative attempt in American history to move toward universal coverage, the Affordable Care Act. Among his stated reasons for repeal, he alleges that the act has "resulted in runaway costs," promoted health care rationing, reduced competition and narrowed choice.

Each of these statements is clearly and demonstrably false. Health care spending per person has grown less rapidly in the six years since the Affordable Care Act was enacted than in any corresponding period in the last four decades. There is now less health care rationing than at any time in living memory, if the term rationing includes denial of care because it is unaffordable. Rationing because of unaffordability is certainly down for the more than 20 million people who are newly insured because of the Affordable Care Act. Hospital re-admissions, a standard indicator of low quality, are down, and the health care exchanges that Trump now says he would abolish, but that resemble the "health marts" he once espoused, have brought more choice to individual shoppers than private employers now offer or ever offered their workers.

Trump's proposed alternative to the Affordable Care Act is even worse than his criticism of it. He would retain the highly popular provision in the act that bars insurance companies from denying people coverage because of preexisting conditions, a practice all too common in the years before the health care law. But he would do away with two other provisions of the Affordable Care Act that are essential to make that reform sustainable: the mandate that people carry insurance and the financial assistance to make that requirement feasible for people of modest means.

Without those last two provisions, barring insurers from using preexisting conditions to jack up premiums or deny coverage would destroy the insurance market. Why? Because without the mandate and the financial aid, people would have powerful financial incentives to wait until they were seriously ill to buy insurance. They could safely do so, confident that some insurer would have to sell them coverage as soon as they became ill. Insurers that set affordable prices would go broke. If insurers set prices high enough to cover costs, few customers could afford them.
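The unraveling logic can be made concrete with a stylized simulation: guaranteed issue, no mandate and no subsidies, one uniform premium repriced each year to the average cost of whoever remains, and buyers who keep coverage only if the premium does not exceed their expected claims. Every number here is hypothetical.

```python
# Ten risk groups with expected annual claims from $500 to $5,000.
pool = [500 * i for i in range(1, 11)]
premium = sum(pool) / len(pool)          # price at the whole pool's average cost

for year in range(1, 7):
    # Without a mandate or subsidies, anyone whose expected claims fall
    # short of the premium drops coverage.
    pool = [c for c in pool if c >= premium]
    if not pool:
        print(f"year {year}: the pool is empty")
        break
    premium = sum(pool) / len(pool)      # re-price to the sicker remaining pool
    print(f"year {year}: {len(pool)} groups insured, premium ${premium:,.0f}")
```

Each round, the healthiest remaining buyers drop out, the pool gets sicker, and the premium ratchets upward until only the costliest risks remain: the ‘death spiral’ that the mandate and financial assistance are designed to prevent.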

In simple terms, Trump's promise to bar insurers from using preexisting conditions to screen customers but simultaneously to scrap the companion provisions that make the bar feasible is either the fraudulent offer of a huckster who takes voters for fools, or clear evidence of stunning ignorance about how insurance works. Take your pick.

Unfortunately, none of the other Republican candidates offers a plan demonstrably superior to Trump's. All begin by calling for repeal and replacement of the Affordable Care Act. But none has yet advanced a well-crafted replacement.

It is not that the Affordable Care Act is perfect legislation. It isn't. But, as the old saying goes, you can't beat something with nothing. And so far as health care reform is concerned, nothing is what the Republican candidates now have on offer.


Editor's note: This piece originally appeared in U.S. News and World Report.

Authors

Publication: U.S. News and World Report
Image Source: © Lucy Nicholson / Reuters




or

Recent Social Security blogs—some corrections


Recently, Brookings has posted two articles commenting on proposals to raise the full retirement age for Social Security retirement benefits from 67 to 70. One revealed a fundamental misunderstanding of how the program actually works and of what the effects of the policy change would be. The other proposed changes to the system that would subvert the fundamental purpose of Social Security in the name of ‘reforming’ it.

A number of Republican presidential candidates and others have proposed raising the full retirement age. In a recent blog, Robert Shapiro, a Democrat, opposed this move, a position I applaud. But he did so based on alleged effects the proposal would in fact not have and on a misunderstanding of how the program actually works. In another blog, Stuart Butler, a conservative, noted correctly that increasing the full benefit age would ‘bolster the system’s finances,’ but he too misunderstood the proposal’s effects. He proposed instead to end Social Security as a universal pension based on past earnings and to replace it with income-related welfare for the elderly and disabled (which he calls insurance).

Let’s start with the misunderstandings common to both authors and to many others. Each writes as if raising the ‘full retirement age’ from 67 to 70 would fall more heavily on those with comparatively low incomes and short life expectancies. In fact, raising the ‘full retirement age’ would cut Social Security Old-Age Insurance benefits by the same proportion for rich and poor alike, and for people whose life expectancies are long or short. To see why, one needs to understand how Social Security works and what ‘raising the full retirement age’ means.

People may claim Social Security retirement benefits starting at age 62. If they wait, they get larger benefits—about 6-8 percent more for each year they delay claiming, up to age 70. Those who don’t claim their benefits until age 70 qualify for benefits 77 percent higher than those with the same earnings history who claim at age 62. The increments approximately compensate the average person for waiting, so that the lifetime value of benefits is independent of the age at which they claim. Mechanically, the computation pivots on the benefit payable at the ‘full retirement age,’ now age 66 but set to increase to age 67 under current law. Raising the full retirement age still more, from 67 to 70, would mean that people age 70 would get the same benefit payable under current law at age 67. That is a benefit cut of 24 percent. Because the annual percentage adjustment for waiting to claim would be unchanged, people who claim benefits at any age, down to age 62, would also receive benefits reduced by 24 percent.

In plain English, ‘raising the full benefit age from 67 to 70’ is simply a 24 percent across-the-board cut in benefits for all new claimants, whatever their incomes and whatever their life expectancies.
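A minimal sketch makes the across-the-board point concrete. It assumes, as a simplification, that the statutory monthly adjustment factors (a reduction of 5/9 of 1 percent per month for the first 36 months of early claiming and 5/12 of 1 percent per month beyond that, plus an 8 percent annual delayed-claiming credit) simply shift along with the new full retirement age:

```python
def benefit_multiplier(claim_age: int, fra: int) -> float:
    """Fraction of the full-retirement-age benefit payable at a given claiming age."""
    months = (claim_age - fra) * 12
    if months >= 0:                      # delayed claiming: 8% per year credit
        return 1.0 + 0.08 * months / 12
    early = -months                      # early claiming reductions
    first = min(early, 36) * (5 / 9) / 100
    rest = max(early - 36, 0) * (5 / 12) / 100
    return 1.0 - first - rest

for age in range(62, 71):
    old, new = benefit_multiplier(age, 67), benefit_multiplier(age, 70)
    print(f"claim at {age}: {old:.0%} -> {new:.0%}, a cut of {1 - new / old:.0%}")
```

The exact size of the cut depends on the adjustment schedule Congress would enact; the point the sketch illustrates is that the cut is roughly the same proportion at every claiming age, rich or poor, long-lived or not.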

Thus, Robert Shapiro mistakenly writes that boosting the full-benefit age would ‘effectively nullify Social Security for millions of Americans’ with comparatively low life expectancies. It wouldn’t. Anyone who wanted to claim benefits at age 62 still could. Their benefits would be reduced. But so would benefits of people who retire at older ages.

Equally mistaken is Stuart Butler’s comment that increasing the full-benefit age from 67 to 70 would ‘cut total lifetime retirement benefits proportionately more for those on the bottom rungs of the income ladder.’ It wouldn’t. The cut would be proportionately the same for everyone, regardless of past earnings or life expectancy.

Both Shapiro and Butler, along with many others including my other colleagues Barry Bosworth and Gary Burtless, have noted correctly that life expectancies of high earners have risen considerably, while those of low earners have risen little or not at all. As a result, the lifetime value of Social Security Old-Age Insurance benefits has grown more for high- than for low-earners. That development has been at least partly offset by trends in Social Security Disability Insurance, which goes disproportionately to those with comparatively low earnings and life expectancies and which has been growing far faster than Old-Age Insurance, the largest component of Social Security.

But even if the lifetime value of all Social Security benefits has risen faster for high earners than for low earners, an across-the-board cut in benefits does nothing to offset that trend. In the name of lowering overall Social Security spending, it would cut benefits by the same proportion for those whose life expectancies have risen not at all because the life expectancy of others has risen. Such ‘evenhandedness’ calls to mind Anatole France’s comment that French law ‘in its majestic equality, ...forbids rich and poor alike to sleep under bridges, beg in streets, or steal loaves of bread.’

Faulty analyses, such as those of Shapiro and Butler, cannot conceal a genuine challenge to policy makers. Social Security does face a projected, long-term funding shortfall. Trends in life expectancies may well have made the system less progressive overall than it was in the past. What should be done?

For starters, one needs to recognize that for those in successive age cohorts who retire at any given age, rising life expectancy does not lower, but rather increases, their need for Social Security retirement benefits, because whatever personal savings they may have accumulated must be stretched more thinly to cover more retirement years.

For those who remain healthy, the best response to rising longevity may be to retire later. Later retirement means more time to save and fewer years to depend on savings. Here is where the wrong-headedness of Butler’s proposal, to phase down benefits for those with current incomes of $25,000 or more and eliminate them for those with incomes over $100,000, becomes apparent. The only sources of income for full retirees are personal savings and, to an ever-diminishing degree, employer-financed pensions. Converting Social Security from a program whose benefits are based on past earnings to one whose benefits are based on current income from savings would impose a tax-like penalty on such savings, just as a direct tax on those savings would. Conservatives and liberals alike should understand that taxing something is not the way to encourage it.
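A minimal sketch with hypothetical figures shows the size of that tax-like penalty. If an $18,000 annual benefit were phased out linearly as current income rises from $25,000 to $100,000, the phase-out would act as a 24 percent surtax on every extra dollar of savings income in that range:

```python
BENEFIT = 18_000                           # hypothetical annual benefit
PHASE_START, PHASE_END = 25_000, 100_000   # phase-out range described above

# Each extra dollar of income in the range forfeits this share of benefits:
implicit_rate = BENEFIT / (PHASE_END - PHASE_START)
print(f"{implicit_rate:.0%}")              # -> 24%
```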

Still, working longer by definition lowers retirement income needs. That is why some analysts have proposed raising the age at which retirement benefits may first be claimed from 62 to some later age. But this proposal, like across-the-board benefit cuts, falls alike on those who can work longer without undue hardship and on those in physically demanding jobs they can no longer perform, those whose abilities are reduced, and those who have low life expectancies. This group includes not only blue-collar workers but also many white-collar employees, as indicated by a recent study from the Boston College Retirement Center. If entitlement to Social Security retirement benefits is delayed, it is incumbent on policymakers to link that change to other ‘backstop’ policies that protect those for whom continued work poses a serious burden. It is also incumbent on private employers to design ways to make workplaces friendlier to an aging workforce.

The challenge of adjusting Social Security in the face of unevenly distributed increases in longevity, growing income inequality, and the prospective shortfall in Social Security financing is real. The issues are difficult. But solutions are unlikely to emerge from confusion about the way Social Security operates and the actual effects of proposed changes to the program. Nor will they emerge from proposals that would bring to Social Security the failed Vietnam War strategy of destroying a village in order to save it.

Authors

Image Source: © Sam Mircovich / Reuters




or

Disability insurance: The Way Forward


Editor’s note: The remarks below were delivered to the Committee for a Responsible Federal Budget on release of their report on the SSDI Solutions Initiative

I want to thank Marc Goldwein for inviting me to join you for today’s event. We all owe thanks to Jim McCrery and Earl Pomeroy for devoting themselves to the SSDI Solutions Initiative, to the staff of CRFB who backed them up, and most of all to the scholars and practitioners who wrote the many papers that comprise this effort. This is the sort of practical, problem-solving enterprise that this town needs more of. So, to all involved in this effort, ‘hats off’ and ‘please, don’t stop now.’

The challenge of improving how public policy helps people with disabilities seemed urgent last year. Depletion of the Social Security Disability Insurance trust fund loomed. Fears of exploding DI benefit rolls were widespread and intense.

Congress has now taken steps that delay projected depletion until 2022. Meticulous work by Jeffrey Liebman suggests that Disability Insurance rolls have peaked and will start falling. The Technical Panel appointed by the Social Security Advisory Board concurred in its 2015 report. With such ‘good’ news, it is all too easy to let attention drift to other, seemingly more pressing items.

But trust fund depletion and growing beneficiary rolls are not the most important reasons why policymakers should be focusing on these programs.

The primary reason is that the design and administration of disability programs can be improved with benefit to taxpayers and to people with disabilities alike. And while 2022 seems a long time off, doing the research called for in the SSDI Solutions Initiative will take all of that time and more. So, it is time to get to work, not to relax.

Before going any further, I must make a disclaimer. I was invited to talk here as chair of the Social Security Advisory Board. Everything I am going to say from now on will reflect only my personal views, not those of the other members or staff of the SSAB except where the Board has spoken as a group. The same disclaimer applies to the trustees, officers, and other staff of the Brookings Institution. Blame me, not them.

Let me start with an analogy. We economists like indices. Years ago, the late Arthur Okun came up with an index to measure how much pain the economy was inflicting on people. It was a simple index, just the sum of inflation and the unemployment rate. Okun called it the ‘misery index.’

I suggest a ‘policy misery index’—a measure of the grief that a policy problem causes us. It is the sum of a problem’s importance and difficulty. Never mind that neither ‘importance’ nor ‘difficulty’ is quantifiable. Designing and administering interventions intended to improve the lives of people with disabilities has to be at or near the top of the policy misery index.

Those who have worked on disability know what I mean. Programs for people with disabilities are hugely important and miserably hard to design and administer well. That would be true even if legislators were writing afresh on a blank legislative sheet. That they must cope with a deeply entrenched program about which analysts disagree and on which many people depend makes the problems many times more challenging.

I’m going to run through some of the reasons why designing and administering benefits for people determined to be disabled is so difficult. Some may be obvious, even banal, to the highly informed group here today. And you will doubtless think of reasons I omit.

First, the concept of disability, in the sense of a diminished capacity to work, has no clear meaning, the SSA definition of disability notwithstanding. We can define impairments. Some are so severe that work or, indeed, any other form of self-support seems impossible. But even among those with severe impairments, some people work for pay, and some don’t.

That doesn’t mean that if someone with a given impairment works, everyone with that same impairment could work if they tried hard enough. It means that physical or mental impairments incompletely identify those for whom work is not a reasonable expectation. The possibility of work depends on the availability of jobs, of services to support work effort, and of a host of personal characteristics, including functional capacities, intelligence, and grit.

That is not how the current disability determination process works. It considers the availability of jobs in the national, not the local, economy. It ignores the availability of work supports or accommodations by potential employers.

Whatever eligibility criteria one may establish for benefits, some people who really can’t work, or can’t earn enough to support themselves, will be denied benefits. And some will be awarded benefits who could work.

Good program design helps keep those numbers down. Good administration helps at least as much as, and maybe more than, program design. But there is no way to reduce the number of improper awards and improper denials to zero.

Second, the causes of disability are many and varied. Again, this observation is obvious, almost banal. Genetic inheritance, accidents and injuries, wear and tear from hard physical labor, and normal aging all create different needs for assistance.

These facts mean that people deemed unable to work have different needs. They constitute distinct interest groups, each seeking support, but not necessarily of the same kind. These groups sometimes compete with each other for always-limited resources. And that competition means that the politics of disability benefits are, shall we say, interesting.

Third, the design of programs to help people deemed unable to work is important and difficult. Moral hazard is endemic. Providing needed support and services is an act of compassion and decency. The goal is to provide such support and services while preserving incentives to work and controlling costs borne by taxpayers.

But preserving work incentives is only part of the challenge. The capacity to work is continuous, not binary. Training and a wide and diverse range of services can help people perform activities of daily living and work.

Because resources are scarce, policy makers and administrators have to sort out who should get those services. Should it be those who are neediest? Those who are most likely to recover full capacities? Triage is inescapable. It is technically difficult. And it is always ethically fraught.

Designing disability benefit programs is hard. But administering them well is just as important and at least as difficult.

These statements may also be obvious to those who are here today. But recent legislation and appropriations raise doubts about whether they are obvious to, or accepted by, some members of Congress.

Let’s start with program design. We can all agree, I think, that incentives matter. If benefits ceased at the first dollar earned, few who come on the rolls would ever try to work.

So, Congress, for many years, has allowed beneficiaries to earn any amount for a brief period and small amounts indefinitely without losing eligibility. Under current law, there is a benefit cliff. If—after a trial work period—beneficiaries earn even $1 more than what is called substantial gainful activity, $1,130 in 2016, their benefit checks stop. They retain eligibility for health coverage for a while even after they leave the rolls. And for an extended period they may regain cash and health benefits without delay if their earnings decline.

Members of Congress have long been interested in whether a more gradual phase-out of benefits as earnings rise might encourage work. Various aspects of the current Disability Insurance program reflect Congress’s desire to encourage work.

The so-called Benefit Offset National Demonstration—or BOND—was designed to test the impact on labor supply by DI beneficiaries of one formula—replacing the “cliff” with a gradual reduction in benefits: $1 of benefit lost for each $2 of earnings above the Substantial Gainful Activity level.

Alas, there were problems with that demonstration. It tested only one offset scenario – one starting point and one rate. So, there could be no way of knowing whether a 2-for-1 offset was the best way to encourage work.

And then there was the uncomfortable fact that, at the time of the last evaluation, out of 79,440 study participants only 21 experienced the offset. So there was no way of telling much of anything, other than that few people had worked enough to experience the offset.

Nor was the cause of non-response obvious. It is not clear how many demonstration participants even understood what was on offer.

Unsurprisingly, members of Congress interested in promoting work among DI recipients asked SSA to revisit the issue. The 2015 DI legislation mandates a new demonstration, christened the Promoting Opportunity Demonstration, or POD. POD uses the same 2-for-1 offset rate that BOND did, but the offset starts at earnings of $810 a month or less in 2016—well below the earnings level at which the BOND phase-out began.

Unfortunately, as Kathleen Romig has pointed out in an excellent paper for the Center on Budget and Policy Priorities, this demonstration is unlikely to yield useful results. Only a very few atypical DI beneficiaries are likely to find it in their interest to participate in the demonstration, fewer even than in the BOND. That is because the POD offset begins at lower earnings than the BOND offset did. In addition, participants in POD sacrifice the right under current law that permits people receiving disability benefits to earn any amount for 9 months of working without losing any benefits.

Furthermore, the 2015 law stipulated that no Disability Insurance beneficiary could be required to participate in the demonstration or, having agreed to participate, forced to remain in the demonstration. Thus, few people are likely to respond to the POD or to remain in it.

There is a small group to whom POD will be very attractive—those few DI recipients who retain a lot of earning capacity. The POD will allow them to retain DI coverage until their earnings are quite high. For example, a person receiving a $2,000 monthly benefit—well above the average, to be sure, but well below the maximum—would remain eligible for some benefits until his or her annual earnings exceeded $57,700. I don’t know about you, but I doubt that Congress would favorably consider permanent law of this sort.
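The $57,700 figure follows directly from the offset arithmetic described above: benefits fall $1 for every $2 of monthly earnings above the $810 threshold. Here is a minimal sketch, using the $2,000 monthly benefit from the example:

```python
OFFSET_START = 810        # monthly earnings where the POD offset begins (2016)

def monthly_benefit(base: float, earnings: float) -> float:
    """DI benefit after the $1-for-$2 offset on earnings above the threshold."""
    return max(base - max(earnings - OFFSET_START, 0) / 2, 0)

base = 2_000
breakeven = OFFSET_START + 2 * base      # monthly earnings at which benefits reach zero
print(breakeven * 12)                    # -> 57720, roughly the $57,700 cited above
print(monthly_benefit(base, breakeven))  # -> 0.0
```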

Not only would those participating be a thin and quite unrepresentative sample of DI beneficiaries in general, or even of those with some earning capacity, but selection bias resulting from the opportunity to opt out at any time would destroy the external validity of any statistical results.

Let me be clear. My comments on POD, the demonstration mandated in the 2015 legislation, are not meant to denigrate the need for, or the importance of, research on how to encourage work by DI recipients, especially those for whom financial independence is plausible. On the contrary, as I said at the outset, research is desperately needed on this issue, as well as many others. It is not yet too late to authorize a research design with a better chance of producing useful results.

But it will be too late soon. Fielding demonstrations takes time:

  • to solicit bids from contractors,
  • for contractors to formulate bids,
  • for government boards to select the best one,
  • for contractors to enroll participants,
  • for contractors to administer the demonstration,
  • and for analysts to process the data generated by the demonstrations.

That process will take all the time available between now and 2021 or 2022 when the DI trust fund will again demand attention. It will take a good deal more time than that to address the formidable and intriguing research agenda of SSDI Solutions Initiative.

I should like to conclude with plugs for two initiatives to which the Social Security Advisory Board has been giving some attention.

It takes too long for disability insurance applicants to have their cases decided. Perhaps the whole determination process should be redesigned. One of the CRFB papers proposes just that. But until that happens, it is vital to shorten the unconscionable delays separating initial denials and reconsideration from the hearings before administrative law judges to which applicants are legally entitled. Procedural reforms in the hearing process might help. More ALJs surely will.

The 2015 budget act requires the Office of Personnel Management to take steps that will help increase the number of ALJs hired. I believe that the new director, Beth Cobert, is committed to reforms. But it is very hard to change legal interpretations that have hampered hiring for years and the sluggish bureaucratic culture that fostered them.

So, the jury is out on whether OPM can deliver. In a recent op-ed in Politico, Lanhee Chen, a Republican member of the SSAB, and I jointly urged Congress to be ready, if OPM fails to deliver more and better lists of ALJ candidates and streamlined procedures for their appointment, to move the ALJ examination authority to another federal organization, such as the Administrative Conference of the United States.

Lastly, there is a facet of income support policy that we on the SSAB all agree merits much more attention than it has received. Just last month, the SSAB released a paper entitled Representative Payees: A Call to Action. More than eight million beneficiaries have been deemed incapable of managing $77 billion in benefits that the Social Security Administration provided them in 2014.

We believe that serious concern is warranted about all aspects of the representative payee program—how this infringement of personal autonomy is found to be necessary, how payees are selected, and how payee performance is monitored.

Management of representative payees is a particular challenge for the Social Security Administration. Its primary job is to pay cash benefits in the right amount to the right person at the right time. SSA does that job at rock-bottom cost and with remarkable accuracy. It is handling rapidly rising workloads with budgets that have barely risen. SSA is neither designed nor staffed to provide social services. Yet determining the need for, selecting, and monitoring representative payees is a social service function.

As the Baby Boom ages, the number of people needing help in administering cash benefits from the Social Security Administration—and from other agencies such as the Veterans Administration—will grow. So will the number needing help in making informed choices under Medicare and Medicaid.

The SSAB is determined to look into this challenge and to make constructive suggestions. We are just beginning and invite others to join in studying what I have called “the most important problem the public has never heard of.”

Living with disabilities today is markedly different from what it was in 1956, when the Disability Insurance program began. Yet the DI program has changed little. Beneficiaries and taxpayers pay heavily for the failure of public policy to apply what has been learned over the past six decades about health, disability, function, and work.

I hope that SSA and Congress will use well the time until it next must legislate on Disability Insurance. The DI rolls are stabilizing. The economy has grown steadily since the Great Recession. Congress has reinstated demonstration authority. With adequate funding for research and testing, the SSA can rebuild its research capability. Along with the external research community, it can identify what works and help Congress improve the DI program for beneficiaries and taxpayers alike. The SSDI Solutions Initiative is a fine roadmap.

Authors

Publication: Committee for a Responsible Federal Budget
Image Source: © Max Whittaker / Reuters




or

The next stage in health reform


Health reform (aka Obamacare) is entering a new stage. The recent announcement by United Health Care that it will stop selling insurance to individuals and families through most health insurance exchanges marks the transition. In the next stage, federal and state policy makers must decide how to use broad regulatory powers they have under the Affordable Care Act (ACA) to stabilize, expand, and diversify risk pools, improve local market competition, encourage insurers to compete on product quality rather than premium alone, and promote effective risk management. In addition, insurance companies must master rate setting, plan design, and network management and effectively manage the health risk of their enrollees in order to stay profitable, and consumers must learn how to choose and use the best plan for their circumstances.

Six months ago, United Health Care (UHC) announced that it was thinking about pulling out of the ACA exchanges. Now, it is pulling out of all but a “handful” of marketplaces. UHC is the largest private vendor of health insurance in the nation. Nonetheless, the impact on people who buy insurance through the ACA exchanges will be modest, according to careful analyses from the Kaiser Family Foundation and the Urban Institute. The effect is modest for three reasons. First, in some states UHC focuses on group insurance rather than on insurance sold to individuals, and it is not always a major presence in the individual market. Second, premiums of UHC products in individual markets are relatively high. Third, in most states and counties ACA purchasers will still have a choice of two or more other options. In addition, UHC’s departure may coincide with, or actually cause, the entry of other insurers, as seems to be happening in Iowa.

The announcement by UHC is noteworthy, however. It signals the beginning for the ACA exchanges of a new stage in their development, with challenges and opportunities different from, and in many ways more important than, those they faced during the first three years of operation, when the challenge was just to get up and running. From the time when HealthCare.Gov and the various state exchanges opened their doors until now, administrators grappled non-stop with administrative challenges—enrolling people, helping them make informed choices among insurance offerings, computing the right amount of assistance each individual or family should receive, modifying plans when income or family circumstances change, and performing various ‘back office’ tasks such as transferring data to and from insurance companies. The chaotic first weeks after the exchanges opened on October 1, 2013 have been well documented, not least by critics of the ACA. Less well known are the countless behind-the-scenes crises, patches, and work-arounds that harried exchange administrators used for years afterwards to keep the exchanges open and functioning.

The ACA forced not just exchange administrators but also insurers to cope with a new system and with new enrollees. Many new exchange customers were uninsured prior to signing up for marketplace coverage. Insurers had little or no information on what their use of health care would be. That meant that insurers could not be sure where to set premiums or how aggressively to try to control costs, for example by limiting the networks of physicians and hospitals enrollees could use. Some did the job well or got lucky. Some didn’t. United seems to have fallen in the second category. United could have stayed in the 30 or so state markets it is leaving and tried to figure out ways to compete more effectively, but since its marketplace premiums were often not competitive and most of its business was with large groups, management decided to focus on that highly profitable segment of the insurance market. Some insurers are seeking sizeable premium increases for insurance year 2017, in part because of unexpectedly high use of health care by new exchange enrollees.

United is not alone in having a rough time in the exchanges. So did most of the cooperative plans that were set up under the ACA. Of the 23 cooperative plans that were established, more than half have gone out of business and more may follow. These developments do not signal the end of the ACA or even indicate a crisis. They do mark the end of an initial period when exchanges were learning how best to cope with clerical challenges posed by a quite complicated law and when insurance companies were breaking into new markets. In the next phase of ACA implementation, federal and state policy makers will face different challenges: how to stabilize, expand, and diversify marketplace risk pools, promote local market competition, and encourage insurers to compete on product quality rather than premium alone. Insurance company executives will have to figure out how to master rate setting, plan design, and network management and manage risk for customers with different characteristics than those to which they have become accustomed.

Achieving these goals will require state and federal authorities to go beyond the core implementation decisions that have absorbed most of their attention to date and exercise powers the ACA gives them. For example, section 1332 of the ACA authorizes states to apply for waivers starting in 2017 under which they can seek to achieve the goals of the 2010 law in ways different from those specified in the original legislation. Along quite different lines, efforts are already underway in many state-based marketplaces, such as the District of Columbia, to expand and diversify the individual market risk pool by expanding marketing efforts to enroll new consumers, especially young adults. Minnesota’s Health Care Task Force recently recommended options to stabilize marketplace premiums, including reinsurance, maximum limits on the excess capital reserves or surpluses of health plans, and the merger of individual and small group markets, as Massachusetts and Vermont have done.

In normal markets, prices must cover costs, and while some companies prosper, some do not. In that respect, ACA markets are quite normal. Some regional and national insurers, along with a number of new entrants, have experienced losses in their marketplace business in 2016. One reason seems to be that insurers priced their plans aggressively in 2014 and 2015 to gain customers and then held steady in 2016. Now, many are proposing significant premium hikes for 2017.

Others, like United, are withdrawing from some states. ACA exchange administrators and state insurance officials must now take steps to encourage continued or new insurer participation, including by new entrants such as Medicaid managed care organizations (MCOs). For example, in New Mexico, where in 2016 Blue Cross Blue Shield withdrew from the state exchange, state officials now need to work with that insurer to ensure a smooth transition as it re-enters the New Mexico marketplace and to encourage other insurers to join it. In addition, state insurance regulators can use their rate review authority to benefit enrollees by promoting fair and competitive pricing among marketplace insurers. During the rate review process, which sometimes evolves into a bargaining process, insurance regulators often have the ability to put downward pressure on rates, although they must be careful to avoid the risk of underpricing marketplace plans, which could compromise the financial viability of insurers and cause them to withdraw from the market. Exchanges have an important role in the affordability of marketplace plans, too. For example, ACA marketplace officials in the District of Columbia and Connecticut work closely with state regulators during the rate review process in an effort to keep rates affordable and adequate to assure insurers a fair rate of return.

Several studies now indicate that in selecting among health insurance plans people tend to give disproportionate weight to premium price, and insufficient attention to other cost provisions—deductibles and cost sharing—and to quality of service and care. A core objective of the ACA is to encourage insurance customers to evaluate plans comprehensively. This objective will be hard to achieve, as health insurance is perhaps the most complicated product most people buy. But it will be next to impossible unless customers have tools that help them take account of the cost implications of all plan features and that report accurately and understandably on plan quality and service. HealthCare.gov and the state-based marketplaces, to varying degrees, already offer consumers access to a number of decision support tools, such as total cost calculators, integrated provider directories, and formulary look-ups, along with tools that indicate provider network size. These should be refined over time. In addition, efforts are now underway at the federal and state level to provide more data to consumers so that they can make quality-driven plan choices. In 2018, the marketplaces will be required to display federally developed quality ratings and enrollee satisfaction information. The District of Columbia is examining the possibility of adding additional measures. California has proposed that, starting in 2018, plans may contract only with providers and hospitals that meet state-specified metrics of quality and enrollee safety at a reasonable price. Such efforts will proliferate, even if not all succeed.
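The kind of comparison a total cost calculator enables can be illustrated with a minimal sketch; the plan terms and the spending level are hypothetical, and real calculators model benefit designs in far more detail:

```python
def total_annual_cost(monthly_premium, deductible, coinsurance, oop_max, spending):
    """Premiums plus out-of-pocket costs at a given level of covered spending."""
    out_of_pocket = min(min(spending, deductible)
                        + max(spending - deductible, 0) * coinsurance,
                        oop_max)
    return monthly_premium * 12 + out_of_pocket

SPENDING = 6_000  # a hypothetical year of covered care
print(total_annual_cost(250, 6_000, 0.30, 7_000, SPENDING))  # low premium:  9000.0
print(total_annual_cost(400, 1_500, 0.20, 5_000, SPENDING))  # high premium: 7200.0
```

For this consumer, the plan with the lower sticker-price premium ends up roughly $1,800 more expensive over the year, precisely the trade-off that premium-focused shoppers miss.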

Beyond regulatory efforts noted above, insurance companies themselves have a critical role to play in contributing to the continued success of the ACA. As insurers come to understand the risk profiles of marketplace enrollees, they will be better able to set rates, design plans, and manage networks and thereby stay profitable. In addition, insurers are best positioned to maintain the stability of their individual market risk pools by developing and financing marketing plans to increase the volume and diversity of their exchange enrollments. It is important, in addition, that insurers, such as UHC, stop creaming off good risks from the ACA marketplaces by marketing limited coverage insurance products, such as dread disease policies and short term plans. If they do not do so voluntarily, state insurance regulators and the exchanges should join in stopping them from doing so.

Most of the attention paid to the ACA to date has focused on efforts to extend health coverage to the previously uninsured and to the administrative stumbles associated with that effort. While insurance coverage will broaden further, the period of rapid growth in coverage is at an end. And while administrative challenges remain, the basics are now in place. Now, the exchanges face the hard work of promoting vigorous and sustainable competition among insurers and of providing their customers with information so that insurers compete on what matters: cost, service, and quality of health care.

Editor's note: This piece originally appeared in Real Clear Markets. Kevin Lucia and Justin Giovannelli contributed to this article with generous support from The Commonwealth Fund.

Authors

Image Source: © Brian Snyder / Reuters




or

Brookings experts on the implications of COVID-19 for the Middle East and North Africa

The novel coronavirus was first identified in January 2020, having caused people to become ill in Wuhan, China. Since then, it has rapidly spread across the world, causing widespread fear and uncertainty. At the time of writing, close to 500,000 cases and 20,000 deaths had been confirmed globally; these numbers continue to rise at an…





or

To fast or not to fast—that is the coronavirus question for Ramadan





or

The end of Kansas-Missouri’s border war should mark a new chapter for both states’ economies

This week, Governor Kelly of Kansas and Governor Parson of Missouri signed a joint agreement to end the longstanding economic border war between their two states. For years, Kansas and Missouri taxpayers subsidized the shuffling of jobs across the state line that runs down the middle of the Kansas City metro area, with few new…





or

Boosting growth across more of America

On Wednesday, January 29, the Brookings Metropolitan Policy Program (Brookings Metro) hosted “Boosting Growth Across More of America: Pushing Back Against the ‘Winner-take-most’ Economy,” an event delving into the research and proposals offered in Robert D. Atkinson, Mark Muro, and Jacob Whiton’s recent report “The case for growth centers: How to spread tech innovation across…





or

Taking the off-ramp: A path to preventing terrorism





or

The U.S. needs a national prevention network to defeat ISIS

The recent release of a Congressional report highlighting that the United States is the “top target” of the Islamic State coincided with yet another gathering of members of the global coalition to counter ISIL to take stock of the effort. There, Defense Secretary Carter echoed the sentiments of an increasing number of political and military leaders when he said that military […]





or

An agenda for reducing poverty and improving opportunity


SUMMARY:
With the U.S. poverty rate stuck at around 15 percent for years, it’s clear that something needs to change, and candidates need to focus on three pillars of economic advancement—education, work, and family—to increase economic mobility, according to Brookings Senior Fellow Isabel Sawhill and Senior Research Assistant Edward Rodrigue.

“Economic success requires people’s initiative, but it also requires us, as a society, to untangle the web of disadvantages that make following the sequence difficult for some Americans. There are no silver bullets. Government cannot do this alone. But government has a role to play in motivating individuals and facilitating their climb up the economic ladder,” they write.

The pillar of work is the most urgent, they assert, with every candidate needing concrete jobs proposals. Closing the jobs gap (the difference in work rates between lower- and higher-income households) has a huge effect on the number of people in poverty, even if the new workers hold low-wage jobs. Work connects people to mainstream institutions, helps them learn new skills, provides structure to their lives, and provides a sense of self-sufficiency and self-respect, while at the aggregate level it is one of the most important engines of economic growth. Specifically, the authors advocate making work pay (EITC), a second-earner deduction, childcare assistance and paid leave, and transitional job programs. On the education front, they suggest investment in children at all stages of life: home visiting, early childhood education, new efforts in the primary grades, new kinds of high schools, and fresh policies aimed at helping students from poor families attend and graduate from post-secondary institutions. And for the third prong, stable families, Sawhill and Rodrigue suggest changing social norms around the importance of responsible, two-person parenthood, as well as making the most effective forms of birth control (IUDs and implants) more widely available at no cost to women.

“Many of our proposals would not only improve the life prospects of less advantaged children; they would pay for themselves in higher taxes and less social spending. The candidates may have their own blend of responses, but we need to hear less rhetoric and more substantive proposals from all of them,” they conclude.

Downloads

Authors





or

Campaign 2016: Ideas for reducing poverty and improving economic mobility


We can be sure that the 2016 presidential candidates, whoever they are, will be in favor of promoting opportunity and cutting poverty. The question is: how? In our contribution to a new volume published today, “Campaign 2016: Eight big issues the presidential candidates should address,” we show that people who clear three hurdles—graduating from high school, working full-time, and delaying parenthood until they are in a stable, two-parent family—are very much more likely to climb into the middle class than to fall into poverty.

But what specific policies would help people achieve these three benchmarks of success? Our paper contains a number of ideas that candidates might want to adopt. Here are a few examples:

1. To improve high school graduation rates, expand “Small Schools of Choice,” a New York City program that replaced large existing schools with a greater number of smaller schools, each with a theme or focus (like STEM or the arts). The program increased graduation rates by about 10 percentage points and also led to higher college enrollment, with no increase in costs.

2. To support work, make the Child and Dependent Care Tax Credit (CDCTC) refundable and cap eligibility at $100,000 in household income. Because the credit is currently non-refundable, low-income families receive little or no benefit, while those with incomes above $100,000 receive generous tax benefits. This proposal would make the program more equitable and facilitate low-income parents’ labor force participation, at no additional cost.

3. To strengthen families, make the most effective forms of birth control (IUDs and implants) more widely available at no cost to women, along with good counseling and a choice of all FDA-approved methods. Programs that have done this in selected cities and states have reduced unplanned pregnancies, saved money, and made it easier for women to delay parenthood until they and their partners are ready to be parents. Delayed childbearing reduces poverty rates and leads to better prospects for the children in these families.

These are just a few examples of evidence-based ideas that a candidate might want to propose and implement if elected. Additional ideas and analysis can be found in our longer paper on this topic.

Image Source: © Darren Hauck / Reuters
or

The decline in marriage and the need for more purposeful parenthood


If you’re reading this article, chances are you know people who are still getting married. But it’s getting rarer, especially among the youngest generation and those who are less educated. We used to assume people would marry before having children. But marriage is no longer the norm. Half of all children born to women under 30 are born out of wedlock. The proportion is even higher among those without a college degree.

What’s going on here? Most of today’s young adults don’t feel ready to marry in their early 20s. Many have not completed their educations; others are trying to get established in a career; and many grew up with parents who divorced and are reluctant to make a commitment or take the risks associated with a legally binding tie.

But these young people are still involved in romantic relationships. And yes, they are having sex. Any stigma associated with premarital sex disappeared a long time ago, and with sex freely available, there’s even less reason to bother with tying the knot. The result: a lot of drifting into unplanned pregnancies and births to unmarried women and their partners, with the biggest problems now concentrated among those in their 20s rather than in their teens. (The teen birth rate has actually declined since the early 1990s.)

Does all of this matter? In a word, yes.

These trends are not good for the young people involved, and they are especially problematic for the many children being born outside marriage. The parents may be living together at the time of the child’s birth, but these cohabiting relationships are highly unstable. Most will have split up before the child is age 5.

Social scientists who have studied the resulting growth of single-parent families have shown that the children in these families don’t fare as well as children raised in two-parent families. They are four or five times as likely to be poor; they do less well in school; and they are more likely to engage in risky behaviors as adolescents. Taxpayers end up footing the bill for the social assistance that many of these families need.

Is there any way to restore marriage to its formerly privileged position as the best way to raise children? No one knows. The fact that well-educated young adults are still marrying is a positive sign and a reason for hope. On the other hand, the decline in marriage and the rise in single parenthood have been dramatic, and the economic and cultural transformations behind these trends may be difficult to reverse.

Women are no longer economically dependent on men, jobs have dried up for working-class men, and unwed parenthood is no longer especially stigmatized. The proportion of children raised in single-parent homes has, as a consequence, risen from 5 percent in 1960 to about 30 percent now.

Conservatives have called for the restoration of marriage as the best way to reduce poverty and other social ills. However, they have not figured out how to do this.

The George W. Bush administration funded a series of marriage education programs that failed to move the needle in any significant way. The Clinton administration reformed welfare to require work and thus reduced any incentive welfare might have had in encouraging unwed childbearing. The retreat from marriage has continued despite these efforts. We are stuck with a problem that has no clear governmental solution, although religious and civic organizations can still play a positive role.

But perhaps the issue isn’t just marriage. What may matter even more than marriage is creating stable and committed relationships between two mature adults who want and are ready to be parents before having children. That means reducing the very large fraction of births to young unmarried adults that occur before these young people say they are ready for parenthood.

Among single women under the age of 30, 73 percent of all pregnancies are, according to the women themselves, either unwanted or badly mistimed. Some of these women will go on to have an abortion, but 60 percent of all the babies born to this group are unplanned.

As I argue in my book, “Generation Unbound,” we need to combine new cultural messages about the importance of committed relationships and purposeful childbearing with new ways of helping young adults avoid accidental pregnancies. The good news here is that new forms of long-acting but fully reversible contraception, such as the IUD and the implant, when made available to young women at no cost and with good counseling on their effectiveness and safety, have led to dramatic declines in unplanned pregnancies. Initiatives in Colorado, Iowa, and St. Louis have shown what can be accomplished on this front.

Would greater access to the most effective forms of birth control move the needle on marriage? Quite possibly. Unencumbered by children from prior relationships, and with greater education and earning power, young women and men would be in a better position to marry. And even if they fail to marry, they will be better parents.

My conclusion: marriage is in trouble and, however desirable, will be difficult to restore. But we can at least ensure that casual relationships outside of marriage don’t produce children before their biological parents are ready to take on one of the most difficult social tasks any of us ever undertakes: raising a child. Accidents happen; a child shouldn’t be one of them.

Editor's Note: This piece originally appeared in Inside Sources.

Publication: Inside Sources
Image Source: © Lucy Nicholson / Reuters