
China’s Land Grab is Undermining Grassroots Democracy

After almost four months of continuous confrontation between villagers and local officials, the land grab in the fishing village of Wukan, in Guangdong province, China, has now led to the death of one of the elected village leaders in police custody and has escalated into a violent "mass incident" involving tens of thousands of farmers…


Louisiana’s prescription drug experiment: A model for the nation?

The high cost of prescription drugs has become an increasingly pressing concern for policymakers, insurers, and families. New drugs—like those now available for hepatitis C—offer tremendous medical benefits, but at a cost that puts them out of reach for many patients. In an effort to address the affordability dilemma, the Louisiana Department of Health…


Commodities, industry, and the African Growth Miracle

The 2016 Spring Meetings of the International Monetary Fund (IMF) and World Bank occur during uncertain times for the “African Growth Miracle.” After more than two decades of sustained economic expansion, growth in sub-Saharan Africa slowed to 3.4 percent in 2015, the weakest performance since 2009. The slowdown reflects lower commodity prices, declining growth…


Africa Industrialization Day: Moving from rhetoric to reality

Sunday, November 20 marked another United Nations “Africa Industrialization Day.” If anything, the level of attention to industrializing Africa coming from regional organizations, the multilateral development banks, and national governments has increased since the last one. This year, the new president of the African Development Bank flagged industrial development as one of his “high five”…


Italy’s hazardous new experiment: Genetically modified populism

Finally, three months after its elections, Italy has produced a new creature in the political biosphere: a “populist but technocratic” government. What we will be watching is not really the result of a Frankenstein experiment, but rather something closer to a genetically modified organism. Such a pairing is probably unheard of in history: Into a…


Italy’s political turmoil shows that parliaments can confront populists

Italy has considerable experience with changes of government, having seen 68 governments in 73 years. However, even by Italian standards, what happened this summer to the first populist government in an advanced economy is unusual, to say the least. It is also instructive for other countries, showing the key roles of parliaments and…


The U.S. May Need More Lawyers!

Tens of billions of consumer dollars are lost to the legal profession each year because of industry standards and regulations that have created a lawyer monopoly, write Clifford Winston and Robert Crandall. Winston and Crandall propose opening up the legal field and using innovative IT and online services to meet the demand for routine legal work.


The Law Firm Business Model Is Dying

Clifford Winston and Robert Crandall say that the bankruptcies of major, long-standing law firms signal a change in how businesses and the public find legal services. Winston and Crandall argue that deregulation would revitalize the industry, bringing new ideas, technologies, talents, and operating procedures into the practice of law.


Islamic Comrades No More

The coup last July in Egypt opened a new divide in the Middle East, alienating the Gulf monarchies from the Muslim Brotherhood. Vali Nasr looks at why this is a momentous change in the region’s strategic landscape that promises to influence governments and regional alliances for years to come.


Despite Predictions, BCRA Has Not Been a Democratic 'Suicide Bill'

During debates in Congress and in the legal battles testing its constitutionality, critics of the Bipartisan Campaign Reform Act of 2002 predicted a host of debilitating consequences. The law's ban on party soft money and its regulation of electioneering advertising would, they warned, produce a parade of horribles: a decline in political speech protected by the First Amendment, the demise of political parties, and the dominance of interest groups in federal election campaigns.

The forecast that attracted the most believers — among politicians, journalists, political consultants, election-law attorneys and scholars — was the claim that Democrats would be unable to compete against Republicans under the new rules, primarily because the Democrats' relative ability to raise funds would be severely crippled. One year ago, Seth Gitell in The Atlantic Monthly summarized this view and went so far as to call the new law "The Democratic Party Suicide Bill." Gitell quoted a leading Democratic Party attorney, who expressed his private view of the law as "a fascist monstrosity." He continued, "It is grossly offensive ... and on a fundamental level it's horrible public policy, because it emasculates the parties to the benefit of narrow-focus special-interest groups. And it's a disaster for the Democrats. Other than that, it's great."

The core argument was straightforward. Democratic Party committees were more dependent on soft money — unlimited contributions from corporations, unions and individuals — than were the Republicans. While they managed to match Republicans in soft-money contributions, they trailed badly in federally limited hard-money contributions. Hence, the abolition of soft money would put the Democrats at a severe disadvantage in presidential and Congressional elections.

In addition, the argument went, by increasing the amount an individual could give to a candidate from $1,000 to $2,000, the law would provide a big financial boost to President Bush, who would double the $100 million he raised in 2000 and vastly outspend his Democratic challenger. Finally, the ban on soft money would weaken the Democratic Party's get-out-the-vote efforts, particularly in minority communities, while the regulation of "issue ads" would remove a potent electoral weapon from the arsenal of labor unions, the party's most critical supporter.

After 18 months of experience under the law, the fundraising patterns in this year's election suggest that these concerns were greatly exaggerated. Money is flowing freely in the campaign, and many voices are being heard. The political parties have adapted well to an all-hard-money world and have suffered no decline in total revenues. And interest groups are playing a secondary role to that of the candidates and parties.

The financial position of the Democratic Party is strikingly improved from what was imagined a year ago. Sen. John Kerry (D-Mass.), who opted out of public funding before the Iowa caucuses, will raise more than $200 million before he accepts his party's nomination in Boston. The unusual unity and energy in Democrats' ranks have fueled an extraordinary flood of small donations to the Kerry campaign, mainly over the Internet. These have been complemented by a series of successful events courting $1,000 and $2,000 donors.

Indeed, since Kerry emerged as the prospective nominee in March, he has raised more than twice as much as Bush and has matched the Bush campaign's unprecedented media buys in battleground states, while also profiting from tens of millions of dollars in broadcast ads run by independent groups that are operating largely outside the strictures of federal election law.

The Democratic national party committees have adjusted to the ban on soft money much more successfully than insiders had thought possible. Instead of relying on large soft-money gifts for half of their funding, Democrats have shown a renewed commitment to small donors and have relied on grassroots supporters to fill their campaign coffers. After the 2000 election, the Democratic National Committee had 400,000 direct-mail donors; today the committee has more than 1.5 million, and hundreds of thousands more who contribute over the Internet.

By the end of June, the three Democratic committees had already raised $230 million in hard money alone, compared to $227 million in hard and soft money combined at this point in the 2000 election cycle. They have demonstrated their ability to replace the soft money they received in previous elections with new contributions from individual donors.

Democrats are also showing financial momentum as the election nears, and thus have been gradually reducing the Republican financial advantage in both receipts and cash on hand. In 2003, Democrats trailed Republicans by a large margin, raising only $95 million, compared to $206 million for the GOP. But in the first quarter of this year, Democrats began to close the gap, raising $50 million, compared to $82 million for Republicans. In the most recent quarter, they narrowed the gap even further, raising $85 million, compared to the Republicans' $96 million.

Democrats are now certain to have ample funds for the fall campaigns. Although they had less than $20 million in the bank (minus debts) at the beginning of this year, they have now banked $92 million. In the past three months, Democrats actually beat Republicans in generating cash — $47 million, compared to $31 million for the GOP.

The party, therefore, has the means to finance a strong coordinated and/or independent-spending campaign on behalf of the presidential ticket, while Congressional committees have the resources they need to play in every competitive Senate and House race, thanks in part to the fundraising support they have received from Members of Congress.

Moreover, FEC reports through June confirm that Democratic candidates in those competitive Senate and House races are more than holding their own in fundraising. They will be aided by a number of Democratic-leaning groups that have committed substantial resources to identify and turn out Democratic voters on Election Day.

Democrats are highly motivated to defeat Bush and regain control of one or both houses of Congress. BCRA has not frustrated these efforts. Democrats are financially competitive with Republicans, which means the outcome will not be determined by a disparity of resources. Put simply, the doomsday scenario conjured up by critics of the new campaign finance law has not come to pass.

Publication: Roll Call

Removing regulatory barriers to telehealth before and after COVID-19

Introduction: A combination of escalating costs, an aging population, and rising chronic health-care conditions that account for 75% of the nation’s health-care costs paints a bleak picture of the current state of American health care.1 In 2018, national health expenditures grew to $3.6 trillion and accounted for 17.7% of GDP.2 Under current laws, national health…


The Marketplace of Democracy: Electoral Competition and American Politics


Brookings Institution Press and Cato Institute, 2006, 312 pp.

Since 1998, U.S. House incumbents have won a staggering 98 percent of their reelection races. Electoral competition is also low and in decline in most state and primary elections. The Marketplace of Democracy combines the resources of two eminent research organizations—the Brookings Institution and the Cato Institute—to address the startling lack of competition in our democratic system. The contributors consider the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable yet for many is becoming stagnant, self-perpetuating, and tone-deaf. How did we get to this point, and what—if anything—should be done about it?

In The Marketplace of Democracy, top-tier political scholars also investigate the perceived lack of competition in arenas previously only speculated about, such as state legislative contests and congressional primaries. Michael McDonald, John Samples, and their colleagues analyze previous reform efforts, such as direct primaries and term limits, and the effects they have had on electoral competition. They also examine current reform efforts in redistricting and campaign finance regulation, as well as the impact of third parties. In sum, what does all this tell us about what might be done to increase electoral competition?

Elections are the vehicles through which Americans choose who governs them, and the power of the ballot enables ordinary citizens to keep public officials accountable. This volume considers different policy options for increasing the competition needed to keep American politics vibrant, responsive, and democratic.


Brookings Forum: "The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics," October 27, 2006.

Podcast: "The Marketplace of Democracy: Electoral Competition and American Politics," a Capitol Hill briefing featuring Michael McDonald and John Samples, September 22, 2006.


Contributors: Stephen Ansolabehere (Massachusetts Institute of Technology), William D. Berry (Florida State University), Bruce Cain (University of California-Berkeley), Thomas M. Carsey (Florida State University), James G. Gimpel (University of Maryland), Tim Groseclose (University of California-Los Angeles), John Hanley (University of California-Berkeley), John Mark Hansen (University of Chicago), Paul S. Herrnson (University of Maryland), Shigeo Hirano (Columbia University), Gary C. Jacobson (University of California-San Diego), Thad Kousser (University of California-San Diego), Frances E. Lee (University of Maryland), John C. Matsusaka (University of Southern California), Kenneth R. Mayer (University of Wisconsin-Madison), Michael P. McDonald (Brookings Institution and George Mason University), Jeffrey Milyo (University of Missouri-Columbia), Richard G. Niemi (University of Rochester), Nathaniel Persily (University of Pennsylvania Law School), Lynda W. Powell (University of Rochester), David Primo (University of Rochester), John Samples (Cato Institute), James M. Snyder Jr. (Massachusetts Institute of Technology), Timothy Werner (University of Wisconsin-Madison), and Amanda Williams (University of Wisconsin-Madison).

ABOUT THE EDITORS

John Samples
John Samples directs the Center for Representative Government at the Cato Institute and teaches political science at Johns Hopkins University.
Michael P. McDonald


Ordering Information:
  • ISBN 978-0-8157-5579-1, $24.95
  • ISBN 978-0-8157-5580-7, $54.95

The Marketplace of Democracy: A Groundbreaking Survey Explores Voter Attitudes About Electoral Competition and American Politics

Event Information

October 27, 2006
10:00 AM - 12:00 PM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

Despite the attention on the mid-term races, few elections are competitive. Electoral competition, already low at the national level, is in decline in state and primary elections as well. Reformers, who point to gerrymandering and a host of other targets for change, argue that improving competition will produce voters who are more interested in elections, better informed on issues, and more likely to turn out at the polls.

On October 27, the Brookings Institution—in conjunction with the Cato Institute and The Pew Research Center—presented a discussion and a groundbreaking survey exploring the attitudes and opinions of voters in competitive and noncompetitive congressional districts. The survey, part of Pew's regular polling on voter attitudes, was conducted through the weekend of October 21. A series of questions explored the public's perceptions, knowledge, and opinions about electoral competitiveness.

The discussion also explored a publication that addresses the startling lack of competition in our democratic system. The Marketplace of Democracy: Electoral Competition and American Politics (Brookings, 2006), considers the historical development, legal background, and political aspects of a system that is supposed to be responsive and accountable, yet for many is becoming stagnant, self-perpetuating, and tone-deaf. Michael McDonald, editor and Brookings visiting fellow, moderated a discussion among co-editor John Samples, director of the Center for Representative Government at the Cato Institute, and Andrew Kohut and Scott Keeter from The Pew Research Center, who also discussed the survey.


The Revenge of the Moderates in U.S. Politics


Alaska Republican Sen. Lisa Murkowski’s write-in candidacy for reelection makes her the latest to join a growing number of prominent politicians who have shed political affiliations in the hopes of winning public office.

Florida Gov. Charlie Crist is running as an independent for the Senate, former Sen. Lincoln Chafee is running as an independent for Rhode Island governor, Mayor Michael Bloomberg became an independent to run New York City, and, of course, Sen. Joe Lieberman lost the 2006 Democratic Senate primary — but won in the general as an independent.

The trend of moderate independent candidates who have forsworn party affiliations is not new to U.S. politics. Since the Civil War, when the modern Republican Party was established to compete against the Democratic Party, minor-party or unaffiliated candidates have won election to the House or Senate a total of 697 times. Of these winners, 89 percent had voting records ideologically between the two major parties.

Despite the recent polarization of U.S. politics, history tells us that moderates make winners. Consider the Wisconsin Progressive Party. Its development has a familiar ring to today’s politics. Extremist elements flourished in the Republican Party during the Great Depression, growing out of our nation’s economic anxieties. GOP moderates responded by creating this Wisconsin group, focused on issues of reform and pragmatic governance.

It started when Wisconsin Gov. Philip La Follette ran for reelection in 1932 as the GOP nominee. He was heckled throughout his speeches by Republican ‘Stalwarts’ on his political right. They “had their Phil” and were angered by his policies of perceived higher taxes to support government spending. La Follette lost the Republican primary to Stalwart-backed Walter Kohler amid then-record turnout. Kohler lost to the Democrat in the general election.

La Follette is a famous political name. Gov. Philip La Follette and Sen. Robert La Follette Jr. were sons of the leading GOP politician, Sen. Robert La Follette Sr. Republican progressives had supported him for the party’s presidential nomination in 1912 and 1916. He eventually ran for president in 1924 — on his own Independent Progressive Party ticket. But while the father’s exploits are well-known, his sons’ reactions to Wisconsin’s political climate are more relevant to today’s politics.

Frustrated by the GOP extremists, the La Follette brothers created the Wisconsin Progressive Party, and they ran as party candidates when successfully elected governor and senator in 1934. Today’s independent candidates share a similar frustration with the ideological purists on their right and left. The extremists in the Democratic and Republican primary electorates are rejecting centrist candidates who might be better positioned to win general elections.

Consider the words of Crist when he declared his independent candidacy. “If you want somebody on the right or you want somebody on the left,” Crist said, “you have the former speaker, Rubio, or the congressman, Meek. If you want somebody who has common sense, who puts the will of the people first, who wants to fight for the people first, now you've got Charlie Crist. You have a choice.”

With all the attention paid to the successes of Tea Party activists during the GOP primaries, it is easy to forget that these are not like general elections. Primary voters tend to be more ideologically extreme. So these Republican primary voters may end up denying the party several general election victories.

For example, many political observers agree that Rep. Mike Castle (R-Del.), a moderate, would have been a stronger candidate for Senate than the GOP primary victor, Christine O’Donnell, his Tea Party-backed opponent. General elections have traditionally been won in the center — where most voters still reside.

Minor party successes usually arise when the two major political parties become ideologically polarized. Moderates can usually find a seat under a big tent, but when party activists are unable to tolerate dissent, moderates are shut out and left to their own devices. So it isn’t surprising that strong candidates holding moderate positions realize they are electorally viable by abandoning their party and appealing to the center in general elections.

History tells us that conditions now are favorable for moderates like Chafee, Crist, Lieberman, and Murkowski. They step into a political vacuum at the center that the major parties created by moving to the political extremes. With room left for further polarization, this may be just the beginning of the rise of moderate independent candidates.

History also tells us the political party that first figures out how to recapture the middle — and bring these candidates and their supporters into the fold — is the one most likely to emerge as dominant.

Publication: POLITICO
Image Source: © Jessica Rinaldi / Reuters

Reviving Faith in Democracy

In a new book, What Democracy is For: On Freedom and Moral Government (Princeton University Press, 2007), Stein Ringen points out the failure of the world's democracies, most specifically the United States and Britain, to live up to their own founding ideological values and expectations. Ringen, professor of Sociology and Social Policy at the University…


The benefits of a knives-out Democratic debate

Stop whining about Democrats criticizing each other. The idea that Democrats attacking Democrats is a risk and an avenue that will deliver reelection to Donald Trump is nonsense. Democrats must attack each other and attack each other aggressively. Vetting presidential candidates, highlighting their weaknesses and the gaps in their record is essential to building a…


In administering the COVID-19 stimulus, the president’s role model should be Joe Biden

As America plunges into recession, Congress and President Donald Trump have approved a series of aid packages to assist businesses, the unemployed, and others impacted by COVID-19. The first three aid packages will likely be supplemented by at least a fourth package, as the nation’s leaders better understand the depth and reach of the economic…


With Sanders out, what’s next for the Democratic presidential race?

Following the withdrawal of Sen. Bernie Sanders from the 2020 presidential race, the Democrats' presumptive nominee for president will be former Vice President Joe Biden. Senior Fellow John Hudak examines how Sanders and other progressives have shifted mainstream Democratic positions, and the repercussions for the Democratic convention in August. He also looks at the leadership…


Policy insights from comparing carbon pricing modeling scenarios

Carbon pricing is an important policy tool for reducing greenhouse gas pollution. The Stanford Energy Modeling Forum exercise 32 convened eleven modeling teams to project emissions, energy, and economic outcomes of an illustrative range of economy-wide carbon price policies. The study compared a coordinated reference scenario involving no new policies with policy scenarios that impose…


My Climate Journey podcast episode 17: Adele Morris


The Neoliberal Podcast: Carbon Taxes ft. Adele Morris, David Hart & Philippe Benoit


Adele Morris on BPEA and looking outside macroeconomics

Adele Morris is a senior fellow in Economic Studies and policy director for Climate and Energy Economics at Brookings. She recently served as a discussant for a paper as part of the Spring 2019 BPEA conference. Her research informs critical decisions related to climate change, energy, and tax policy. She is a leading global expert on the design…


A systematic review of systems dynamics and agent-based obesity models: Evaluating obesity as part of the global syndemic


Modeling community efforts to reduce childhood obesity

Why childhood obesity matters: According to the latest data, childhood obesity affects nearly 1 in 5 children in the United States, a number that has more than tripled since the early 1970s. Children who have obesity are at a higher risk of many immediate health problems, such as high blood pressure and high cholesterol, type…


Development of a computational modeling laboratory for examining tobacco control policies: Tobacco Town


Why did Egyptian democratization fail?


France's pivot to Asia: It's more than just submarines


Editors’ Note: Since President François Hollande’s 2012 election, France has launched an Asia-wide initiative in an attempt to halt declining trade figures and improve its overall leverage with the region, write Philippe Le Corre and Michael O’Hanlon. This piece originally appeared on The National Interest.

On April 26, France’s defense shipbuilding company DCNS beat bids from Japan and Germany to win a long-awaited $40 billion Australian submarine deal. The victory may not come as a surprise to anyone who has been following France’s growing interest in the Asia-Pacific over the past five years. Since President François Hollande’s 2012 election, the country has launched an Asia-wide initiative in an attempt to halt declining trade figures and improve its overall leverage with the region.

Visiting New Caledonia last weekend, Prime Minister Manuel Valls decided on the spot to fly to Australia to celebrate the submarine news. Having been at odds in the 1990s over France’s decision to test its nuclear weapons on an isolated Pacific island, Paris and Canberra have built a close partnership over the last decade, culminating in the contract decision by Australia’s Prime Minister Malcolm Turnbull, in power since September 2015.

Unlike its Japanese competitor Mitsubishi Heavy Industries (MHI), DCNS promised to build the submarines’ main parts on Australian soil, creating 2,900 jobs in the Adelaide area. The French also secured support from U.S. defense contractors Lockheed Martin and Raytheon, one of which will eventually build the combat systems for the twelve Shortfin Barracuda submarines. Meanwhile, this unexpected victory, in light of the close strategic relationship between Australia and Japan, has shed light on France’s sustained ambitions in the Asia-Pacific region. Thanks to its overseas territories of New Caledonia, Wallis and Futuna, French Polynesia and Clipperton Island, France has the world’s second-largest maritime domain. It is also part of QUAD, the Quadrilateral Defence Coordination Group that also includes the United States, Australia and New Zealand, and which coordinates security efforts in the Pacific, particularly in the maritime domain, by supporting island states in robustly and sustainably managing their natural resources, including fisheries.

France is also attempting to correct an excessive focus on China by developing new ties with India, Japan, South Korea and Southeast Asian countries, which have all received a number of French ministerial visits. France’s overseas presence extends to the southern Indian Ocean, with the islands of Mayotte, Réunion and the Scattered Islands, and the French Southern and Antarctic Territories, as well as to the northwest Indian Ocean through its permanent military presence in the United Arab Emirates and Djibouti. Altogether, these territories and outposts are home to one million French citizens. This sets France apart from its fellow EU member states regarding defense and security in the Asia-Pacific, particularly as France is a top supplier of military equipment to several Asian countries, including Singapore, Malaysia, India and Australia. Between 2008 and 2012, Asian nations accounted for 28 percent of French defense equipment sales, versus 12 percent during 1998–2002. (More broadly, 70 percent of European containerized merchandise trade transits the Indian Ocean.)

Despite its unique position, France is also supportive of a joint European Union policy toward the region, especially when it comes to developments in the South China Sea. Last March, with support from Paris, Berlin, London and other members, Federica Mogherini, the EU’s High representative for Foreign Affairs and Security Policy, issued a statement criticizing China’s actions:

“The EU is committed to maintaining a legal order for the seas and oceans based upon the principles of international law, as reflected notably in the United Nations Convention on the Law of the Sea (UNCLOS). This includes the maintenance of maritime safety, security, and cooperation, freedom of navigation and overflight. While not taking a position on claims to land territory and maritime space in the South China Sea, the EU urges all claimants to resolve disputes through peaceful means, to clarify the basis of their claims, and to pursue them in accordance with international law including UNCLOS and its arbitration procedures.”

This does not mean that France is neglecting its “global partnership” with China. In 2014, the two countries celebrated fifty years of diplomatic relations; both governments conduct annual bilateral dialogues on international and security issues. But as a key EU state, a permanent member of the UN Security Council and a significant contributor to the Asia-Pacific’s security, France has launched a multidimensional Asia policy.

All of this should be seen as welcome news by Washington. While there would have been advantages to any of the three worthy bids, a greater French role in the Asia-Pacific should be beneficial. At this crucial historical moment in China's rise and the region's broader blossoming, the United States needs a strong and engaged European partnership to encourage Beijing in the right direction and push back together when that does not occur. Acting in concert with some of the world's other major democracies can add further legitimacy to America's actions to uphold the international order in the Asia-Pacific. To be sure, Japan, South Korea and Australia are key U.S. partners here and will remain so. But each also has its own limitations (and in Japan's case, a great deal of historical baggage in dealing with China).

European states are already heavily involved in economic interactions with China. The submarine decision will help ensure a broader European role that includes a hard-headed perspective on security trends as well.

Publication: The National Interest

Desert Storm after 25 years: Confronting the exposures of modern warfare


Event Information

June 16, 2016
3:00 PM - 5:00 PM EDT

SEIU Building
1800 Massachusetts Ave. NW
Washington, DC

By most metrics, the 1991 Gulf War, also known as Operation Desert Storm, was a huge and rapid success for the United States and its allies. The mission of defeating Iraq's army, which had invaded Kuwait the year before, was accomplished swiftly and decisively. However, the war's impact on the soldiers who fought in it was lasting. Over 650,000 American men and women served in the conflict, and many came home with symptoms including insomnia, respiratory disorders, and memory issues, attributed to a variety of exposures – a condition known as “Gulf War Illness."

On June 16, the Center for 21st Century Security and Intelligence at Brookings and Georgetown University Medical Center co-hosted a discussion on Desert Storm, its veterans, and how they are faring today. Representative Mike Coffman (R-Colo.), the only member of Congress to serve in both Gulf wars, delivered an opening address before joining Michael O’Hanlon, senior fellow at Brookings, for a moderated discussion. Joel Kupersmith, former head of the Office of Research and Development of the Department of Veterans Affairs, convened a follow-on panel with Carolyn Clancy, deputy under secretary for health for organizational excellence at the Department of Veterans Affairs; Adrian Atizado, deputy national legislative director at Disabled American Veterans; and James Baraniuk, professor of medicine at Georgetown University Medical Center.


     
 
 





Is India getting the right mix of fiscal & monetary policy?

       





If you can’t keep hackers out, find and remove them faster

In the wake of recent intrusions into government systems, it is difficult to identify anyone who believes defenders have the advantage in cyberspace. Digital adversaries seem to achieve their objectives at will, spending months inside target networks before someone, usually a third party, discovers the breach. Following the announcement, managers and stakeholders commit to improving…

       





A systematic review of systems dynamics and agent-based obesity models: Evaluating obesity as part of the global syndemic

       





Modeling community efforts to reduce childhood obesity

Why childhood obesity matters According to the latest data, childhood obesity affects nearly 1 in 5 children in the United States, a number which has more than tripled since the early 1970s. Children who have obesity are at a higher risk of many immediate health risks such as high blood pressure and high cholesterol, type…

       





Development of a computational modeling laboratory for examining tobacco control policies: Tobacco Town

       





A modern tragedy? COVID-19 and US-China relations

Executive Summary This policy brief invokes the standards of ancient Greek drama to analyze the COVID-19 pandemic as a potential tragedy in U.S.-China relations and a potential tragedy for the world. The nature of the two countries' political realities in 2020 has led to initial mismanagement of the crisis on both sides of the Pacific.…

       





Moving to Opportunity: What’s next?

In 1992, the U.S. Department of Housing and Urban Development partnered with five public housing authorities to launch Moving to Opportunity — a 10-year fair housing experiment to help low-income families find housing in low-poverty areas. They hoped to test what many people already suspected: different neighborhoods affect opportunity in different ways. The results…

       





Congressional Testimony: Cross-Strait Economic and Political Issues

Cross-Strait relations have marked a path of reduced tension and increasing cooperation since the election of President Ma Ying-jeou of the ruling Chinese Nationalist Party (KMT) in 2008. Taiwan's efforts to institutionalize its engagement with the People's Republic of China (PRC), particularly in trade and investment activities, present both opportunities and challenges on both sides…

       





Moving to Access: Is the current transport model broken?

For several generations, urban transportation policymakers and practitioners around the world favored a “mobility” approach, aimed at moving people and vehicles as fast as possible by reducing congestion. The limits of such an approach, however, have become more apparent over time, as residents struggle to reach workplaces, schools, hospitals, shopping, and numerous other destinations in…

       





Democracy, the China challenge, and the 2020 elections in Taiwan

The people of Taiwan should be proud of their success in consolidating democracy over recent decades. Taiwan enjoys a vibrant civil society, a flourishing media, individual liberties, and an independent judiciary that is capable of serving as a check on abuses of power. Taiwan voters have ushered in three peaceful transfers of power between major…

       





Why Bridgegate proves we need fewer hacks, machines, and back room deals, not more


I had been mulling a rebuttal to my colleague and friend Jon Rauch’s interesting—but wrong—new Brookings paper praising the role of “hacks, machines, big money, and back room deals” in democracy. I thought the indictments of Chris Christie’s associates last week provided a perfect example of the dangers of all of that, and so of why Jon was incorrect. But in yesterday’s L.A. Times, he beat me to it, himself defending the political morality (if not the efficacy) of their actions, and in the process delivering a knockout blow to his own position.

Bridgegate is a perfect example of why we need fewer "hacks, machines, big money, and back room deals" in our politics, not more. There is no justification whatsoever for government officials abusing their powers, stopping emergency vehicles and risking lives, making kids late for school and parents late for their jobs to retaliate against a mayor who withholds an election endorsement. We vote in our democracy to make government work, not break. We expect that officials will serve the public, not their personal interests. This conduct weakens our democracy, not strengthens it.

It is also incorrect that, as Jon suggests, reformers and transparency advocates are, in part, to blame for the gridlock that sometimes afflicts our American government at every level. As my co-authors and I demonstrated at some length in our recent Brookings paper, "Why Critics of Transparency Are Wrong," and in our follow-up op-ed in the Washington Post, reform and transparency efforts are no more responsible for the current dysfunction in our democracy than they were for the gridlock in Fort Lee. Indeed, in both cases, "hacks, machines, big money, and back room deals" are a major cause of the dysfunction. The vicious cycle of special interests, campaign contributions, and secrecy too often freezes our system into stasis, both on a grand scale, when special interests block needed legislation, and on a petty scale, as in Fort Lee. The power of megadonors has, for example, made dysfunction within the House Republican Caucus worse, not better.

Others will undoubtedly address Jon’s new paper at length. But one other point is worth noting now. As in foreign policy discussions, I don’t think Jon’s position merits the mantle of political “realism,” as if those who want democracy to be more democratic and less corrupt are fluffy-headed dreamers. It is the reformers who are the true realists. My co-authors and I in our paper stressed the importance of striking realistic, hard-headed balances, e.g. in discussing our non-absolutist approach to transparency; alas, Jon gives that the back of his hand, acknowledging our approach but discarding the substance to criticize our rhetoric as “radiat[ing] uncompromising moralism.” As Bridgegate shows, the reform movement’s “moralism" correctly recognizes the corrupting nature of power, and accordingly advocates reasonable checks and balances. That is what I call realism. So I will race Jon to the trademark office for who really deserves the title of realist!

      





More Czech governance leaders visit Brookings


I had the pleasure earlier this month of welcoming my friend, Czech Republic Foreign Minister Lubomir Zaoralek, here to Brookings for a discussion of critical issues confronting the Europe-U.S. alliance. Foreign Minister Zaoralek was appointed to his current position in January 2014 after serving as a leading figure in the Czech Parliament for many years. He was accompanied by a distinguished delegation that included Dr. Petr Drulak of the Foreign Ministry, and Czech Ambassador Petr Gandalovic. I was fortunate enough to be joined in the discussion by colleagues from Brookings including Fiona Hill, Shadi Hamid, Steve Pifer, and others, as well as representatives of other D.C. think tanks. Our discussion spanned the globe, from how to respond to the Syrian conflict, to addressing Russia’s conduct in Ukraine, to the thaw in U.S.-Cuba relations, to dealing with the refugee crisis in Europe. The conversation was so fascinating that the sixty minutes we had allotted flew by and we ended up talking for two hours—and we still just scratched the surface.

Amb. Eisen and FM Zaoralek, October 2, 2015

Yesterday, we had a visit from Czech State Secretary Tomas Prouza, accompanied by Ambassador Martin Povejsil, the Czech Permanent Envoy to the EU. We also talked about world affairs. In this case, that included perhaps the most important governance matter now confronting the U.S.: the exceptionally entertaining (if not enlightening) presidential primary season. I expressed my opinion that Vice President Biden would not enter the race, only to have him prove me right in his Rose Garden remarks a few hours later. If only all my predictions came true (and as quickly). We at Brookings benefited greatly from the insights of both of these October delegations, and we look forward to welcoming many more from every part of the Czech political spectrum in the months ahead.

Prouza, Eisen, Povejsil, October 21, 2015

       





More solutions from the campaign finance summit


We have received many emails and calls in response to our blog last week about our campaign finance reform "Solutions Summit," so we thought we would share some pictures and quotes from the event. Also, Issue One's Nick Penniman and I just co-authored an op-ed highlighting the themes of the event, which you can find here.

Ann Ravel, Commissioner of the Federal Election Commission and its outgoing Chairwoman, kicked us off as our luncheon speaker. She noted that "campaign finance issues [will] only be addressed when there is a scandal. The truth is, that campaign finance today is a scandal."

    

(L-R, Ann Ravel, Trevor Potter, Peter Schweizer, Timothy Roemer)

Commenting on Ann's remarks from a conservative perspective, Peter Schweizer, the President of the Government Accountability Institute, noted that "increasingly today the problem is more one of extortion, that the challenge [is] not so much from businesses that are trying to influence politicians, although that certainly happens, but that businesses feel, and are, targeted by politicians in the search for cash." That's Trevor Potter, who introduced Ann, to Peter's left.

Kicking off the first panel, a deep dive into the elements of the campaign finance crisis, was Tim Roemer, former Ambassador to India (2009-2011), Member of the U.S. House of Representatives (D-IN, 1991-2003), Member of the 9/11 Commission, and Senior Strategic Advisor to Issue One. He explained that "This is not a red state problem. It's not a blue state problem. Across the heartland, across America, the Left, the Right, the Democrats, the Republicans, Independents, we all need to work together to fix this."

(L-R, Fred Wertheimer, John Bonifaz, Dan Wolf, Roger Katz, Allen Loughry, Cheri Beasley, Norman Eisen)

Our second panel addressed solutions at the federal and state level.  Here, Fred Wertheimer, the founder and President of Democracy 21, is saying that "We are going to have major scandals again and we are going to have opportunities for major reforms. With this corrupt campaign finance system it is only a matter of time before the scandals really break out. The American people are clearly ready for a change. The largest national reform movement in decades now exists and it's growing rapidly."

Our third and final panel explained why the time for reform is now. John Sarbanes, Member of the U.S. House of Representatives (D-MD), argued that fixes are within political reach. He explained, "If we can build on the way people feel about [what] they're passionate on and lead them that way to this need for reform, then we're going to build the kind of broad, deep coalition that will achieve success ultimately."

 

(L-R in each photo, John Sarbanes, Claudine Schneider, Zephyr Teachout)

Reinforcing John's remarks, Claudine Schneider, Member of the U.S. House of Representatives (R-RI, 1981-1991), pointed out that "we need to keep pounding the media with letters to the editor, with editorial press conferences, with a broad spectrum of media strategies where we can get the attention of the masses. Because once the masses rise up, I believe that's when we're really going to get the change, from the bottom up and the top down."

Grace Abiera contributed to this post.


       





Can the Department of Veterans Affairs be modernized?


Event Information

June 20, 2016
2:00 PM - 3:00 PM EDT

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036

A conversation with VA Secretary Robert McDonald




With the demand for its services constantly evolving, the Department of Veterans Affairs (VA) faces complex challenges in providing accessible care to America’s veterans. Amidst a history of long patient wait times, cost overruns, and management concerns, the VA recently conducted a sweeping internal review of its operations.  The result was the new MyVA program.

How will MyVA improve the VA's care of veterans? What will it do to restore public confidence in its efforts? What changes is the VA undergoing to address both internal concerns and modern challenges in veteran care?

On June 20, Governance Studies at Brookings hosted VA Secretary Robert McDonald. Secretary McDonald described the VA’s transformation strategy and explained how the reforms within MyVA will impact veterans, taxpayers and other stakeholders. He addressed lessons learned not just for the VA but for all government agencies that strive to achieve transformation and improve service delivery.

This event was broadcast live on C-SPAN.



       





Most business incentives don’t work. Here’s how to fix them.

In 2017, the state of Wisconsin agreed to provide $4 billion in state and local tax incentives to the electronics manufacturing giant Foxconn. In return, the Taiwan-based company promised to build a new manufacturing plant in the state for flat-screen television displays and the subsequent creation of 13,000 new jobs. It didn’t happen. Those 13,000…

       





High Achievers, Tracking, and the Common Core


A curriculum controversy is roiling schools in the San Francisco Bay Area.  In the past few months, parents in the San Mateo-Foster City School District, located just south of San Francisco International Airport, voiced concerns over changes to the middle school math program. The changes were brought about by the Common Core State Standards (CCSS).  Under previous policies, most eighth graders in the district took algebra I.  Some very sharp math students, who had already completed algebra I in seventh grade, took geometry in eighth grade. The new CCSS-aligned math program will reduce eighth grade enrollments in algebra I and eliminate geometry altogether as a middle school course. 

A little background information will clarify the controversy.  Eighth grade mathematics may be the single grade-subject combination most profoundly affected by the CCSS.  In California, the push for most students to complete algebra I by the end of eighth grade has been a centerpiece of state policy, as it has been in several states influenced by the “Algebra for All” movement that began in the 1990s.  Nationwide, in 1990, about 16 percent of all eighth graders reported that they were taking an algebra or geometry course.  In 2013, the number was three times larger, and nearly half of all eighth graders (48 percent) were taking algebra or geometry.[i]  When that percentage goes down, as it is sure to under the CCSS, what happens to high achieving math students?

The parents who are expressing the most concern have kids who excel at math.  One parent in San Mateo-Foster City told The San Mateo Daily Journal, “This is really holding the advanced kids back.”[ii] The CCSS math standards recommend a single math course for seventh grade, integrating several math topics, followed by a similarly integrated math course in eighth grade.  Algebra I won’t be offered until ninth grade.  The San Mateo-Foster City School District decided to adopt a “three years into two” accelerated option.  This strategy is suggested on the Common Core website as an option that districts may consider for advanced students.  It combines the curriculum from grades seven through nine (including algebra I) into a two year offering that students can take in seventh and eighth grades.[iii]  The district will also provide—at one school site—a sequence beginning in sixth grade that compacts four years of math into three.  Both accelerated options culminate in the completion of algebra I in eighth grade.

The San Mateo-Foster City School District is home to many well-educated, high-powered professionals who work in Silicon Valley.  They are unrelentingly liberal in their politics.  Equity is a value they hold dear.[iv]  They also know that completing at least one high school math course in middle school is essential for students who wish to take AP Calculus in their senior year of high school.  As CCSS is implemented across the nation, administrators in districts with demographic profiles similar to San Mateo-Foster City will face parents of mathematically precocious kids asking whether the “common” in Common Core mandates that all students take the same math course.  Many of those districts will respond to their constituents and provide accelerated pathways (“pathway” is CCSS jargon for course sequence). 

But other districts will not.  Data show that urban schools, schools with large numbers of black and Hispanic students, and schools located in impoverished neighborhoods are reluctant to differentiate curriculum.  It is unlikely that gifted math students in those districts will be offered an accelerated option under CCSS.  The reason why can be summed up in one word: tracking.

Tracking in eighth grade math means providing different courses to students based on their prior math achievement.  The term “tracking” has been stigmatized, coming under fire for being inequitable.  Historically, where tracking existed, black, Hispanic, and disadvantaged students were often underrepresented in high-level math classes; white, Asian, and middle-class students were often over-represented.  An anti-tracking movement gained a full head of steam in the 1980s.  Tracking reformers knew that persuading high schools to de-track was hopeless.  Consequently, tracking’s critics focused reform efforts on middle schools, urging that they group students heterogeneously with all students studying a common curriculum.  That approach took hold in urban districts, but not in the suburbs.

Now the Common Core and de-tracking are linked.  Providing an accelerated math track for high achievers has become a flashpoint throughout the San Francisco Bay Area.  An October 2014 article in The San Jose Mercury News named Palo Alto, Saratoga, Cupertino, Pleasanton, and Los Gatos as districts that have announced, in response to parent pressure, that they are maintaining an accelerated math track in middle schools.  These are high-achieving, suburban districts.  Los Gatos parents took to the internet with a petition drive when a rumor spread that advanced courses would end.  EdSource reports that 900 parents signed a petition opposing the move and that board meetings on the issue were packed with opponents. The accelerated track was kept.  Piedmont established a single track for everyone, but allowed parents to apply for an accelerated option.  About twenty-five percent did so.  The Mercury News story underscores the demographic pattern that is unfolding and asks whether CCSS "could cement a two-tier system, with accelerated math being the norm in wealthy areas and the exception elsewhere."

What is CCSS’s real role here?  Does the Common Core take an explicit stand on tracking?  Not really.  But de-tracking advocates can interpret the “common” in Common Core as license to eliminate accelerated tracks for high achievers.  As a noted CCSS supporter (and tracking critic), William H. Schmidt, has stated, “By insisting on common content for all students at each grade level and in every community, the Common Core mathematics standards are in direct conflict with the concept of tracking.”[v]  Thus, tracking joins other controversial curricular ideas—e.g., integrated math courses instead of courses organized by content domains such as algebra and geometry; an emphasis on “deep,” conceptual mathematics over learning procedures and basic skills—as “dog whistles” embedded in the Common Core.  Controversial positions aren’t explicitly stated, but they can be heard by those who want to hear them.    

CCSS doesn't have to take an outright stand on these debates in order to have an effect on policy.  For the practical questions that local grouping policies resolve—who takes what courses and when do they take them—CCSS wipes the slate clean.  There are plenty of people ready to write on that blank slate, particularly administrators frustrated by unsuccessful efforts to de-track in the past.

Suburban parents are mobilized in defense of accelerated options for advantaged students.  What about kids who are outstanding math students but also happen to be poor, black, or Hispanic?  What happens to them, especially if they attend schools in which the top institutional concern is meeting the needs of kids functioning several years below grade level?  I presented a paper on this question at a December 2014 conference held by the Fordham Institute in Washington, DC.  I proposed a pilot program of “tracking for equity.”  By that term, I mean offering black, Hispanic, and poor high achievers the same opportunity that the suburban districts in the Bay Area are offering.  High achieving middle school students in poor neighborhoods would be able to take three years of math in two years and proceed on a path toward AP Calculus as high school seniors.

It is true that tracking must be done carefully.  Tracking can be conducted unfairly and has been used unjustly in the past.  One of the worst consequences of earlier forms of tracking was that low-skilled students were tracked into dead end courses that did nothing to help them academically.  These low-skilled students were disproportionately from disadvantaged communities or communities of color.  That’s not a danger in the proposal I am making.  The default curriculum, the one every student would take if not taking the advanced track, would be the Common Core.  If that’s a dead end for low achievers, Common Core supporters need to start being more honest in how they are selling the CCSS.  Moreover, to ensure that the policy gets to the students for whom it is intended, I have proposed running the pilot program in schools predominantly populated by poor, black, or Hispanic students.  The pilot won’t promote segregation within schools because the sad reality is that participating schools are already segregated.

Since I presented the paper, I have privately received negative feedback from both Algebra for All advocates and Common Core supporters.  That’s disappointing.  Because of their animus toward tracking, some critics seem to support a severe policy swing from Algebra for All, which was pursued for equity, to Algebra for None, which will be pursued for equity.  It’s as if either everyone or no one should be allowed to take algebra in eighth grade.  The argument is that allowing only some eighth graders to enroll in algebra is elitist, even if the students in question are poor students of color who are prepared for the course and likely to benefit from taking it.

The controversy raises crucial questions about the Common Core.  What’s common in the common core?  Is it the curriculum?  And does that mean the same curriculum for all?  Will CCSS serve as a curricular floor, ensuring all students are exposed to a common body of knowledge and skills?  Or will it serve as a ceiling, limiting the progress of bright students so that their achievement looks more like that of their peers?  These questions will be answered differently in different communities, and as they are, the inequities that Common Core supporters think they’re addressing may surface again in a profound form.   



[i] Loveless, T. (2008). The 2008 Brown Center Report on American Education. Retrieved from http://www.brookings.edu/research/reports/2009/02/25-education-loveless. For San Mateo-Foster City’s sequence of math courses, see: page 10 of http://smfc-ca.schoolloop.com/file/1383373423032/1229222942231/1242346905166154769.pdf 

[ii] Swartz, A. (2014, November 22). “Parents worry over losing advanced math classes: San Mateo-Foster City Elementary School District revamps offerings because of Common Core.” San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-11-22/parents-worry-over-losing-advanced-math-classes-san-mateo-foster-city-elementary-school-district-revamps-offerings-because-of-common-core/1776425133822.html

[iii] Swartz, A. (2014, December 26). “Changing Classes Concern for parents, teachers: Administrators say Common Core Standards Reason for Modifications.” San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-12-26/changing-classes-concern-for-parents-teachers-administrators-say-common-core-standards-reason-for-modifications/1776425135624.html

[iv] In the 2014 election, Jerry Brown (D) took 75% of Foster City's votes for governor.  In the 2012 presidential election, Barack Obama received 71% of the vote. http://www.city-data.com/city/Foster-City-California.html

[v] Schmidt, W.H. and Burroughs, N.A. (2012) “How the Common Core Boosts Quality and Equality.” Educational Leadership, December 2012/January 2013. Vol. 70, No. 4, pp. 54-58.

Authors

     
 
 





Measuring effects of the Common Core


Part II of the 2015 Brown Center Report on American Education

Over the next several years, policy analysts will evaluate the impact of the Common Core State Standards (CCSS) on U.S. education.  The task promises to be challenging.  The question most analysts will focus on is whether the CCSS is good or bad policy.  This section of the Brown Center Report (BCR) tackles a set of seemingly innocuous questions compared to the hot-button question of whether Common Core is wise or foolish.  The questions all have to do with when Common Core actually started, or more precisely, when the Common Core started having an effect on student learning.  And if it hasn’t yet had an effect, how will we know that CCSS has started to influence student achievement? 

The analysis below probes this issue empirically, hopefully persuading readers that deciding when a policy begins is elemental to evaluating its effects.  The question of a policy’s starting point is not always easy to answer.  Yet the answer has consequences.  You can’t figure out whether a policy worked or not unless you know when it began.[i] 

The analysis uses surveys of state implementation to model different CCSS starting points for states and produces a second early report card on how CCSS is doing.  The first report card, focusing on math, was presented in last year’s BCR.  The current study updates state implementation ratings that were presented in that report and extends the analysis to achievement in reading.  The goal is not only to estimate CCSS’s early impact, but also to lay out a fair approach for establishing when the Common Core’s impact began—and to do it now before data are generated that either critics or supporters can use to bolster their arguments.  The experience of No Child Left Behind (NCLB) illustrates this necessity.

Background

After the 2008 National Assessment of Educational Progress (NAEP) scores were released, former Secretary of Education Margaret Spellings claimed that the new scores showed "we are on the right track."[ii] She pointed out that NAEP gains in the previous decade, 1999-2009, were much larger than in prior decades.  Mark Schneider of the American Institutes for Research (and a former Commissioner of the National Center for Education Statistics [NCES]) reached a different conclusion. He compared NAEP gains from 1996-2003 to 2003-2009 and declared NCLB's impact disappointing.  "The pre-NCLB gains were greater than the post-NCLB gains."[iii]  It is important to highlight that Schneider used the 2003 NAEP scores as the starting point for assessing NCLB.  A report from FairTest on the tenth anniversary of NCLB used the same demarcation for pre- and post-NCLB time frames.[iv]  FairTest is an advocacy group critical of high-stakes testing—and harshly critical of NCLB—but if the 2003 starting point for NAEP is accepted, its conclusion is indisputable, "NAEP score improvement slowed or stopped in both reading and math after NCLB was implemented."

Choosing 2003 as NCLB’s starting date is intuitively appealing.  The law was introduced, debated, and passed by Congress in 2001.  President Bush signed NCLB into law on January 8, 2002.  It takes time to implement any law.  The 2003 NAEP is arguably the first chance that the assessment had to register NCLB’s effects. 

Selecting 2003 is consequential, however.  Some of the largest gains in NAEP's history were registered between 2000 and 2003.  Once 2003 is established as a starting point (or baseline), pre-2003 gains become "pre-NCLB."  But what if the 2003 NAEP scores were influenced by NCLB? Experiments evaluating the effects of new drugs collect baseline data from subjects before treatment, not after the treatment has begun.  Similarly, evaluating the effects of public policies requires baseline data that are not influenced by the policies under evaluation.
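A small sketch makes the baseline problem concrete. The assessment years and scores below are hypothetical, chosen only to mimic the pattern noted above (large gains concentrated just before a 2003 baseline); they are not actual NAEP figures.

```python
# Sketch of how the choice of baseline year changes a pre/post policy
# comparison. Scores are hypothetical, mimicking the pattern described
# in the text: a large jump between 2000 and 2003.
scores = {1996: 224, 2000: 226, 2003: 235, 2009: 239}

def annualized_gain(start, end):
    """Average scale-point gain per year between two assessment years."""
    return (scores[end] - scores[start]) / (end - start)

# With 2003 as the baseline, the big 2000-2003 jump counts as "pre-policy."
pre = annualized_gain(1996, 2003)    # ~1.57 points/year
post = annualized_gain(2003, 2009)   # ~0.67 points/year
print(pre > post)    # True: the policy looks disappointing

# Moving the baseline to 2000 reassigns that jump to the "post" period.
pre2 = annualized_gain(1996, 2000)   # 0.5 points/year
post2 = annualized_gain(2000, 2009)  # ~1.44 points/year
print(pre2 > post2)  # False: now the same policy looks effective
```

The same score series supports opposite verdicts, which is why the starting point has to be defended, not assumed.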

Avoiding such problems is particularly difficult when state or local policies are adopted nationally.  The federal effort to establish a speed limit of 55 miles per hour in the 1970s is a good example.  Several states already had speed limits of 55 mph or lower prior to the federal law’s enactment.  Moreover, a few states lowered speed limits in anticipation of the federal limit while the bill was debated in Congress.  On the day President Nixon signed the bill into law—January 2, 1974—the Associated Press reported that only 29 states would be required to lower speed limits.  Evaluating the effects of the 1974 law with national data but neglecting to adjust for what states were already doing would obviously yield tainted baseline data.

There are comparable reasons for questioning 2003 as a good baseline for evaluating NCLB’s effects.  The key components of NCLB’s accountability provisions—testing students, publicizing the results, and holding schools accountable for results—were already in place in nearly half the states.  In some states they had been in place for several years.  The 1999 iteration of Quality Counts, Education Week’s annual report on state-level efforts to improve public education, entitled Rewarding Results, Punishing Failure, was devoted to state accountability systems and the assessments underpinning them. Testing and accountability are especially important because they have drawn fire from critics of NCLB, a law that wasn’t passed until years later.

The Congressional debate of NCLB legislation took all of 2001, allowing states to pass anticipatory policies.  Derek Neal and Diane Whitmore Schanzenbach reported that “with the passage of NCLB lurking on the horizon,” Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes.[v] In the summer and fall of 2002, with NCLB now the law of the land, state after state released lists of schools falling short of NCLB’s requirements.  Then the 2002-2003 school year began, during which the 2003 NAEP was administered.  Using 2003 as a NAEP baseline assumes that none of these activities—previous accountability systems, public lists of schools in need of improvement, anticipatory policy shifts—influenced achievement.  That is unlikely.[vi]

The Analysis

Unlike NCLB, there was no “pre-CCSS” state version of Common Core.  States vary in how quickly and aggressively they have implemented CCSS.  For the BCR analyses, two indexes were constructed to model CCSS implementation.  They are based on surveys of state education agencies and named for the two years that the surveys were conducted.  The 2011 survey reported the number of programs (e.g., professional development, new materials) on which states reported spending federal funds to implement CCSS.  Strong implementers spent money on more activities.  The 2011 index was used to investigate eighth grade math achievement in the 2014 BCR.  A new implementation index was created for this year’s study of reading achievement.  The 2013 index is based on a survey asking states when they planned to complete full implementation of CCSS in classrooms.  Strong states aimed for full implementation by 2012-2013 or earlier.      
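As a rough sketch, the 2013 index's rating rule can be written in a few lines. The state names and planned completion years below are hypothetical placeholders, and the two-category cutoff is a simplification of the survey-based index described above.

```python
# Hypothetical sketch of the 2013 implementation index: states planning
# full CCSS implementation by the 2012-13 school year or earlier rate
# "strong"; later planners rate "medium"; non-adopters are kept separate.
planned_completion = {
    "State A": 2012,   # calendar year in which the school year ends
    "State B": 2013,
    "State C": 2015,
    "State D": None,   # None marks a state that did not adopt CCSS
}

def rate(year):
    if year is None:
        return "non-adopter"
    return "strong" if year <= 2013 else "medium"

ratings = {state: rate(year) for state, year in planned_completion.items()}
print(ratings)
```

Grouping states this way yields the categories whose NAEP gains are compared in the results below.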

Fourth grade NAEP reading scores serve as the achievement measure.  Why fourth grade and not eighth?  Reading instruction is a key activity of elementary classrooms but by eighth grade has all but disappeared.  What remains of “reading” as an independent subject, which has typically morphed into the study of literature, is subsumed under the English-Language Arts curriculum, a catchall term that also includes writing, vocabulary, listening, and public speaking.  Most students in fourth grade are in self-contained classes; they receive instruction in all subjects from one teacher.  The impact of CCSS on reading instruction—the recommendation that non-fiction take a larger role in reading materials is a good example—will be concentrated in the activities of a single teacher in elementary schools. The burden for meeting CCSS’s press for non-fiction, on the other hand, is expected to be shared by all middle and high school teachers.[vii] 

Results

Table 2-1 displays NAEP gains using the 2011 implementation index.  The four year period between 2009 and 2013 is broken down into two parts: 2009-2011 and 2011-2013.  Nineteen states are categorized as “strong” implementers of CCSS on the 2011 index, and from 2009-2013, they outscored the four states that did not adopt CCSS by a little more than one scale score point (0.87 vs. -0.24 for a 1.11 difference).  The non-adopters are the logical control group for CCSS, but with only four states in that category—Alaska, Nebraska, Texas, and Virginia—it is sensitive to big changes in one or two states.  Alaska and Texas both experienced a decline in fourth grade reading scores from 2009-2013.

The 1.11 point advantage in reading gains for strong CCSS implementers is similar to the 1.27 point advantage reported last year for eighth grade math.  Both are small.  The reading difference in favor of CCSS is equal to approximately 0.03 standard deviations of the 2009 baseline reading score.  Also note that the differences were greater in 2009-2011 than in 2011-2013 and that the “medium” implementers performed as well as or better than the strong implementers over the entire four year period (gain of 0.99).

Table 2-2 displays calculations using the 2013 implementation index.  Twelve states are rated as strong CCSS implementers, seven fewer than on the 2011 index.[viii]  Data for the non-adopters are the same as in the previous table.  In 2009-2013, the strong implementers gained 1.27 NAEP points compared to -0.24 among the non-adopters, a difference of 1.51 points.  The thirty-four states rated as medium implementers gained 0.82.  The strong implementers on this index are states that reported full implementation of CCSS-ELA by 2013.  Their larger gain in 2011-2013 (1.08 points) distinguishes them from the strong implementers in the previous table.  The overall advantage of 1.51 points over non-adopters represents about 0.04 standard deviations of the 2009 NAEP reading score, not a difference with real world significance.  Taken together, the 2011 and 2013 indexes estimate that NAEP reading gains from 2009-2013 were one to one and one-half scale score points larger in the strong CCSS implementation states compared to the states that did not adopt CCSS.
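The effect-size arithmetic behind these comparisons is simple to reproduce.  A minimal sketch, assuming a 2009 NAEP fourth grade reading standard deviation of roughly 37 scale score points (the report does not state the exact figure):

```python
# Sketch of the effect-size arithmetic described above.
# ASSUMPTION: sd_2009 approximates the SD of the 2009 NAEP grade-4
# reading score; the report does not give the exact value.
sd_2009 = 37.0

gain_strong = 1.27      # strong implementers, 2009-2013 (2013 index)
gain_nonadopt = -0.24   # non-adopters, 2009-2013

advantage = gain_strong - gain_nonadopt   # difference in scale score points
effect_size = advantage / sd_2009         # expressed in SD units

print(round(advantage, 2))     # 1.51 scale score points
print(round(effect_size, 2))   # ~0.04 SD, far below common thresholds
```

The same calculation with the 2011 index figures (0.87 vs. -0.24) yields the 1.11-point, roughly 0.03 SD difference reported in Table 2-1.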

Common Core and Reading Content

As noted above, the 2013 implementation index is based on when states scheduled full implementation of CCSS in classrooms.  Other than reading achievement, does the index seem to reflect changes in any other classroom variable believed to be related to CCSS implementation?  If the answer is “yes,” that would bolster confidence that the index is measuring changes related to CCSS implementation. 

Let’s examine the types of literature that students encounter during instruction.  Perhaps the most controversial recommendation in the CCSS-ELA standards is the call for teachers to shift the content of reading materials away from stories and other fictional forms of literature in favor of more non-fiction.  NAEP asks fourth grade teachers the extent to which they teach fiction and non-fiction over the course of the school year (see Figure 2-1). 

Historically, fiction dominates fourth grade reading instruction.  It still does.  The percentage of teachers reporting that they teach fiction to a “large extent” exceeded the percentage answering “large extent” for non-fiction by 23 points in 2009 and 25 points in 2011.  In 2013, the difference narrowed to only 15 percentage points, primarily because of non-fiction’s increased use.  Fiction still dominated in 2013, but not by as much as in 2009.

The differences reported in Figure 2-1 are national indicators of fiction’s declining prominence in fourth grade reading instruction.  What about the states?  We know that they were involved to varying degrees with the implementation of Common Core from 2009-2013.  Is there evidence that fiction’s prominence was more likely to weaken in states most aggressively pursuing CCSS implementation? 

Table 2-3 displays the data tackling that question.  Fourth grade teachers in strong implementation states decisively favored the use of fiction over non-fiction in 2009 and 2011.  But the prominence of fiction in those states experienced a large decline in 2013 (-12.4 percentage points).  The decline for the entire four year period, 2009-2013, was larger in the strong implementation states (-10.8) than in the medium implementation (-7.5) or non-adoption states (-9.8).  

Conclusion

This section of the Brown Center Report analyzed NAEP data and two indexes of CCSS implementation, one based on data collected in 2011, the second from data collected in 2013.  NAEP scores for 2009-2013 were examined.  Fourth grade reading scores improved by 1.11 scale score points in states with strong implementation of CCSS compared to states that did not adopt CCSS.  A similar comparison in last year’s BCR found a 1.27 point difference on NAEP’s eighth grade math test, also in favor of states with strong implementation of CCSS.  These differences, although certainly encouraging to CCSS supporters, are quite small, amounting to (at most) 0.04 standard deviations (SD) on the NAEP scale.  A threshold of 0.20 SD—five times larger—is often invoked as the minimum size for a test score change to be regarded as noticeable.  The current study’s findings are also merely statistical associations and cannot be used to make causal claims.  Perhaps other factors are driving test score changes, unmeasured by NAEP or the other sources of data analyzed here. 

The analysis also found that fourth grade teachers in strong implementation states are more likely to be shifting reading instruction from fiction to non-fiction texts.  That trend should be monitored closely to see if it continues.  Other events to keep an eye on as the Common Core unfolds include the following:

1.  The 2015 NAEP scores, typically released in the late fall, will be important for the Common Core.  In most states, the first CCSS-aligned state tests will be given in the spring of 2015.  Based on the earlier experiences of Kentucky and New York, results are expected to be disappointing.  Common Core supporters can respond by explaining that assessments given for the first time often produce disappointing results.  They will also claim that the tests are more rigorous than previous state assessments.  But it will be difficult to explain stagnant or falling NAEP scores in an era when implementing CCSS commands so much attention.   

2.  Assessment will become an important implementation variable in 2015 and subsequent years.  For analysts, the strategy employed here, modeling different indicators based on information collected at different stages of implementation, should become even more useful.  Some states are planning to use Smarter Balanced Assessments, others are using the Partnership for Assessment of Readiness for College and Careers (PARCC), and still others are using their own homegrown tests.   To capture variation among the states on this important dimension of implementation, analysts will need to use indicators that are up-to-date.

3.  The politics of Common Core injects a dynamic element into implementation.  The status of implementation is constantly changing.  States may choose to suspend, to delay, or to abandon CCSS.  That will require analysts to regularly re-configure which states are considered “in” Common Core and which states are “out.”  To further complicate matters, states may be “in” some years and “out” in others.

A final word.  When the 2014 BCR was released, many CCSS supporters commented that it is too early to tell the effects of Common Core.  The point that states may need more time operating under CCSS to realize its full effects certainly has merit.  But that does not discount everything states have done so far—including professional development, purchasing new textbooks and other instructional materials, designing new assessments, buying and installing computer systems, and conducting hearings and public outreach—as part of implementing the standards.  Some states are in their fifth year of implementation.  It could be that states need more time, but innovations can also produce their biggest “pop” earlier in implementation rather than later.  Kentucky was one of the earliest states to adopt and implement CCSS.  That state’s NAEP fourth grade reading score declined in both 2009-2011 and 2011-2013.  The optimism of CCSS supporters is understandable, but a one and a half point NAEP gain might be as good as it gets for CCSS.



[i] These ideas were first introduced in a 2013 Brown Center Chalkboard post I authored, entitled “When Does a Policy Start?”

[ii] Maria Glod, “Since NCLB, Math and Reading Scores Rise for Ages 9 and 13,” Washington Post, April 29, 2009.

[iii] Mark Schneider, “NAEP Math Results Hold Bad News for NCLB,” AEIdeas (Washington, D.C.: American Enterprise Institute, 2009).

[iv] Lisa Guisbond with Monty Neill and Bob Schaeffer, NCLB’s Lost Decade for Educational Progress: What Can We Learn from this Policy Failure? (Jamaica Plain, MA: FairTest, 2012).

[v] Derek Neal and Diane Schanzenbach, “Left Behind by Design: Proficiency Counts and Test-Based Accountability,” NBER Working Paper No. W13293 (Cambridge: National Bureau of Economic Research, 2007), 13.

[vi] Careful analysts of NCLB have allowed different states to have different starting dates: see Thomas Dee and Brian A. Jacob, “Evaluating NCLB,” Education Next 10, no. 3 (Summer 2010); Manyee Wong, Thomas D. Cook, and Peter M. Steiner, “No Child Left Behind: An Interim Evaluation of Its Effects on Learning Using Two Interrupted Time Series Each with Its Own Non-Equivalent Comparison Series,” Working Paper 09-11 (Evanston, IL: Northwestern University Institute for Policy Research, 2009).

[vii] Common Core State Standards Initiative. “English Language Arts Standards, Key Design Consideration.” Retrieved from: http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/

[viii] Twelve states shifted downward from strong to medium and five states shifted upward from medium to strong, netting out to a seven state swing.



Common Core and classroom instruction: The good, the bad, and the ugly


This post continues a series begun in 2014 on implementing the Common Core State Standards (CCSS).  The first installment introduced an analytical scheme investigating CCSS implementation along four dimensions:  curriculum, instruction, assessment, and accountability.  Three posts focused on curriculum.  This post turns to instruction.  Although the impact of CCSS on how teachers teach is discussed, the post is also concerned with the inverse relationship, how decisions that teachers make about instruction shape the implementation of CCSS.

A couple of points before we get started.  The previous posts on curriculum led readers from the upper levels of the educational system—federal and state policies—down to curricular decisions made “in the trenches”—in districts, schools, and classrooms.  Standards emanate from the top of the system and are produced by politicians, policymakers, and experts.  Curricular decisions are shared across education’s systemic levels.  Instruction, on the other hand, is dominated by practitioners.  The daily decisions that teachers make about how to teach under CCSS—and not the idealizations of instruction embraced by upper-level authorities—will ultimately determine what “CCSS instruction” really means.

I ended the last post on CCSS by describing how curriculum and instruction can be so closely intertwined that the boundary between them is blurred.  Sometimes stating a precise curricular objective dictates, or at least constrains, the range of instructional strategies that teachers may consider.  That post focused on English-Language Arts.  The current post focuses on mathematics in the elementary grades and describes examples of how CCSS will shape math instruction.  As a former elementary school teacher, I offer my own personal opinion on these effects.

The Good

Certain aspects of the Common Core, when implemented, are likely to have a positive impact on the instruction of mathematics. For example, Common Core stresses that students recognize fractions as numbers on a number line.  The emphasis begins in third grade:

CCSS.MATH.CONTENT.3.NF.A.2
Understand a fraction as a number on the number line; represent fractions on a number line diagram.

CCSS.MATH.CONTENT.3.NF.A.2.A
Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.

CCSS.MATH.CONTENT.3.NF.A.2.B
Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.


When I first read this section of the Common Core standards, I stood up and cheered.  Berkeley mathematician Hung-Hsi Wu has been working with teachers for years to get them to understand the importance of using number lines in teaching fractions.[1] American textbooks rely heavily on part-whole representations to introduce fractions.  Typically, students see pizzas and apples and other objects—usually other foods or money—that are divided up into equal parts.  Such models are limited.  They work okay with simple addition and subtraction.  Common denominators present a bit of a challenge, but ½ pizza can be shown to be also 2/4, a half dollar equal to two quarters, and so on. 

With multiplication and division, all the little tricks students learned with whole number arithmetic suddenly go haywire.  Students are accustomed to the fact that multiplying two whole numbers yields a product that is larger than either number being multiplied: 4 X 5 = 20 and 20 is larger than both 4 and 5.[2]  How in the world can 1/4 X 1/5 = 1/20, a number much smaller than either 1/4 or 1/5?  The part-whole representation has convinced many students that fractions are not numbers.  Instead, they are seen as strange expressions comprising two numbers with a small horizontal bar separating them. 

I taught sixth grade but occasionally visited my colleagues’ classes in the lower grades.  I recall one exchange with second or third graders that went something like this:

“Give me a number between seven and nine.”  Giggles. 

“Eight!” they shouted. 

“Give me a number between two and three.”  Giggles.

“There isn’t one!” they shouted. 

“Really?” I’d ask and draw a number line.  After spending some time placing whole numbers on the number line, I’d observe,  “There’s a lot of space between two and three.  Is it just empty?” 

Silence.  Puzzled little faces.  Then a quiet voice.  “Two and a half?”

You have no idea how many children do not make the transition to understanding fractions as numbers and, because of stumbling at this crucial stage, spend the rest of their careers as students of mathematics convinced that fractions are an impenetrable mystery.  And that’s not true of just students.  California adopted a test for teachers in the 1980s, the California Basic Educational Skills Test (CBEST).  Beginning in 1982, even teachers already in the classroom had to pass it.  I made a nice after-school and summer income tutoring colleagues who didn’t know fractions from Fermat’s Last Theorem.  To be fair, primary teachers, teaching kindergarten or grades 1-2, would not teach fractions as part of their math curriculum and probably hadn’t worked with a fraction in decades.  So they are no different from non-literary types who think Hamlet is just a play about a young guy who can’t make up his mind, has a weird relationship with his mother, and winds up dying at the end.

Division is the most difficult operation to grasp for those arrested at the part-whole stage of understanding fractions.  A problem that Liping Ma posed to teachers is now legendary.[3]

She asked small groups of American and Chinese elementary teachers to divide 1 ¾ by ½ and to create a word problem that illustrates the calculation.  All 72 Chinese teachers gave the correct answer and 65 developed an appropriate word problem.  Only nine of the 23 American teachers solved the problem correctly.  A single American teacher was able to devise an appropriate word problem.  Granted, the American sample was not selected to be representative of American teachers as a whole, but the stark findings of the exercise did not shock anyone who has worked closely with elementary teachers in the U.S.  They are often weak at math.  Many of the teachers in Ma’s study had vague ideas of an “invert and multiply” rule but lacked a conceptual understanding of why it worked.

A linguistic convention exacerbates the difficulty.  Students may cling to the mistaken notion that “dividing in half” means “dividing by one-half.”  It does not.  Dividing in half means dividing by two.  The number line can help clear up such confusion.  Consider a basic, whole-number division problem for which third graders will already know the answer:  8 divided by 2 equals 4.   It is evident that a segment 8 units in length (measured from 0 to 8) is divided by a segment 2 units in length (measured from 0 to 2) exactly 4 times.  Modeling 12 divided by 2 and other basic facts with 2 as a divisor will convince students that whole number division works quite well on a number line. 

Now consider the number ½ as a divisor.  It will become clear to students that 8 divided by ½ equals 16, and they can illustrate that fact on a number line by showing how a segment ½ units in length divides a segment 8 units in length exactly 16 times; it divides a segment 12 units in length 24 times; and so on.  Students will be relieved to discover that on a number line division with fractions works the same as division with whole numbers.

Now, let’s return to Liping Ma’s problem: 1 ¾ divided by ½.  This problem would not be presented in third grade, but it might be in fifth or sixth grade.  Students who have been working with fractions on a number line for two or three years will have little trouble solving it.  They will see that the problem simply asks them to divide a line segment of 1 ¾ units by a segment of ½ units.  The answer is 3 ½.  Some students might estimate that the solution is between 3 and 4 because 1 ¾ lies between 1 ½ and 2, which on the number line are the points at which the ½ unit segment, laid end on end, falls exactly three and four times.  Other students will have learned about reciprocals and that multiplication and division are inverse operations.  They will immediately grasp that dividing by ½ is the same as multiplying by 2—and since 1 ¾ x 2 = 3 ½, that is the answer.  Creating a word problem involving string or rope or some other linearly measured object is also surely within their grasp.
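The arithmetic in Ma’s problem can be checked with Python’s standard fractions module (a sketch for verification only; the code is not part of Ma’s exercise):

```python
from fractions import Fraction

a = Fraction(7, 4)   # 1 3/4 as an improper fraction
b = Fraction(1, 2)

# Division as measurement: how many 1/2-unit segments fit in 1 3/4 units?
quotient = a / b
print(quotient)      # 7/2, i.e., 3 1/2

# Equivalent to "invert and multiply": dividing by 1/2 is multiplying by 2
assert a / b == a * Fraction(2, 1)
```

The assertion mirrors the reciprocal reasoning described above: the measurement interpretation and the invert-and-multiply rule give the same result.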

Conclusion

I applaud the CCSS for introducing number lines and fractions in third grade.  I believe it will instill in children an important idea: fractions are numbers.  That foundational understanding will aid them as they work with more abstract representations of fractions in later grades.   Fractions are a monumental barrier for kids who struggle with math, so the significance of this contribution should not be underestimated.

I mentioned above that instruction and curriculum are often intertwined.  I began this series of posts by defining curriculum as the “stuff” of learning—the content of what is taught in school, especially as embodied in the materials used in instruction.  Instruction refers to the “how” of teaching—how teachers organize, present, and explain those materials.  It’s each teacher’s repertoire of instructional strategies and techniques that differentiates one teacher from another even as they teach the same content.  Choosing to use a number line to teach fractions is obviously an instructional decision, but it also involves curriculum.  The number line is mathematical content, not just a teaching tool.

Guiding third grade teachers towards using a number line does not guarantee effective instruction.  In fact, it is reasonable to expect variation in how teachers will implement the CCSS standards listed above.  A small body of research exists to guide practice. One of the best resources for teachers to consult is a practice guide published by the What Works Clearinghouse: Developing Effective Fractions Instruction for Kindergarten Through Eighth Grade (see full disclosure below).[4]  The guide recommends the use of number lines as its second recommendation, but it also states that the evidence supporting the effectiveness of number lines in teaching fractions is inferred from studies involving whole numbers and decimals.  We need much more research on how and when number lines should be used in teaching fractions.

Professor Wu states the following, “The shift of emphasis from models of a fraction in the initial stage to an almost exclusive model of a fraction as a point on the number line can be done gradually and gracefully beginning somewhere in grade four. This shift is implicit in the Common Core Standards.”[5]  I agree, but the shift is also subtle.  CCSS standards include the use of other representations—fraction strips, fraction bars, rectangles (which are excellent for showing multiplication of two fractions) and other graphical means of modeling fractions.  Some teachers will manage the shift to number lines adroitly—and others will not.  As a consequence, the quality of implementation will vary from classroom to classroom based on the instructional decisions that teachers make.  

The current post has focused on what I believe to be a positive aspect of CCSS based on the implementation of the standards through instruction.  Future posts in the series—covering the “bad” and the “ugly”—will describe aspects of instruction on which I am less optimistic.



[1] See H. Wu (2014). “Teaching Fractions According to the Common Core Standards,” https://math.berkeley.edu/~wu/CCSS-Fractions_1.pdf. Also see "What's Sophisticated about Elementary Mathematics?" http://www.aft.org/sites/default/files/periodicals/wu_0.pdf

[2] Students learn that 0 and 1 are exceptions and have their own special rules in multiplication.

[3] Liping Ma, Knowing and Teaching Elementary Mathematics.

[4] The practice guide can be found at: http://ies.ed.gov/ncee/wwc/pdf/practice_guides/fractions_pg_093010.pdf I serve as a content expert in elementary mathematics for the What Works Clearinghouse.  I had nothing to do, however, with the publication cited.

[5] Wu, page 3.


Implementing Common Core: The problem of instructional time


This is part two of my analysis of instruction and Common Core’s implementation.  I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.”  Having discussed “the “good” in part one, I now turn to “the bad.”  One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record.  Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn.     

The numerator, time spent learning, has also been given the term opportunity to learn.  The denominator, time needed to learn, is synonymous with student aptitude.  By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] 
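Carroll’s ratio can be sketched in code.  The linear form and the cap at full mastery are simplifying assumptions chosen for illustration, not Carroll’s exact function:

```python
# Minimal sketch of Carroll's model: degree of learning as a function of
# time spent learning over time needed to learn (aptitude).
# ASSUMPTIONS: a linear form capped at 1.0 (full mastery); Carroll's
# original model only specifies learning as a function of this ratio.
def degree_of_learning(time_spent: float, time_needed: float) -> float:
    return min(1.0, time_spent / time_needed)

# Two students receive the same 10 hours of instruction but need
# different amounts of time to learn the same content:
print(degree_of_learning(10, 8))    # 1.0 -> mastery, with 2 hours to spare
print(degree_of_learning(10, 20))   # 0.5 -> insufficient opportunity to learn
```

The two calls illustrate the teaching challenge Carroll identified: a fixed allotment of time overshoots for one student and undershoots for another.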

The source of that variation is largely irrelevant to the constraints placed on instructional decisions.  Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator.  Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like.  There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought.  As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article.  Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll.  It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum.  This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning.  Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today.  Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.     

Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator.  When that happens, students don’t have an adequate opportunity to learn.  They need more time. 

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time.  Time is a limited resource that teachers deploy in the production of learning.  Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary.  Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics.  Excessive instructional time may also negatively affect student engagement.  Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M.  A standard algorithm is a series of steps designed to compute accurately and quickly.  In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers.  Most readers of this post will recognize the standard algorithm for addition.  It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.
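The steps just described can be rendered as a hypothetical digit-by-digit implementation (a sketch for illustration, not code from the standards or from either author):

```python
def standard_addition(x: int, y: int) -> int:
    """Add two non-negative integers with the standard algorithm:
    align digits by place value and add columns right to left,
    carrying (regrouping) as needed."""
    xd = [int(d) for d in str(x)][::-1]   # least-significant digit first
    yd = [int(d) for d in str(y)][::-1]
    result, carry = [], 0
    for i in range(max(len(xd), len(yd))):
        column = carry \
            + (xd[i] if i < len(xd) else 0) \
            + (yd[i] if i < len(yd) else 0)
        result.append(column % 10)        # digit written below the column
        carry = column // 10              # carry into the next place value
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(standard_addition(478, 356))   # 834, with carries in the ones and tens
```

The carry variable makes explicit the place-value reasoning that, as Zimba notes below, any acceptable algorithm under CCSS-M must reinforce.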

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards.  Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade.  This opens the door for a lot of wasted time.  Garelick questioned the wisdom of teaching several alternative strategies for addition.  He asked whether, under the Common Core, only the standard algorithm could be taught—or, at the least, whether it could be taught first.  As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 

 

Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations.  He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

 

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades.  Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

 

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

 

3.NBT.2  Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

 

The term “strategies and algorithms” is curious.  Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” 

 

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that  despite these being standards, the CCSS-M will allow them great latitude.  Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

 

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. 

 

Standard algorithms were a key point of contention in the “Math Wars” of the 1990s.   The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii]  The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade.  These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists.  In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 

 

Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade.  That’s what reformers get in the compromise.  They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic.  But the standard algorithm is the only one students are required to learn.  And that exclusivity is intended to please the traditionalists.

 

I agree with Garelick that the compromise leads to problems.  In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home.  The students were being taught how to represent addition with drawings that clustered objects into groups of ten.  The exercises were both time consuming and tedious.  When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning.  The parents withdrew their child from the school and enrolled him in private school.

 

The value of standard algorithms is that they are efficient and packed with mathematics.  Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.  Traditionalists and reformers have different goals.  Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems.  Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

 

I have been a critic of the math reform movement since I taught in the 1980s.  But some of their complaints have merit.  All too often, instruction on standard algorithms has left out meaning.  As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps to be memorized.  Michael Battista’s research has provided many instances of students clinging to algorithms without understanding.  He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet.  On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15.  In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25.  Despite the obvious discrepancy (the student agrees that 25 is not 15), he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii] 
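
Battista’s example is easy to reproduce in code.  In this sketch (a hypothetical illustration of the error, not Battista’s own), the buggy routine keeps only the ones digit of each column sum, which is exactly the “forgot to carry” mistake: it reports 19 + 6 = 15, while counting on from 19 gives 25.

```python
def add_forgetting_carry(a: int, b: int) -> int:
    """Buggy column addition that drops the carry: each column keeps
    only the ones digit of its sum (the error in Battista's example)."""
    result, place = 0, 1
    while a or b:
        result += ((a % 10 + b % 10) % 10) * place  # carry is discarded
        a, b, place = a // 10, b // 10, place * 10
    return result

print(add_forgetting_carry(19, 6))  # 15, the student's worksheet answer
print(19 + 6)                       # 25, the answer he gets by counting on
```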

 

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that?  Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions.  Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

 

Conclusion

Let’s return to Carroll’s model of time and learning.  I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences.  Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade.  Thus, the study of whole number arithmetic is completed by the end of fourth grade.  Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics.  Proficiency is sought along three dimensions:  1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years.  Placing the standard for the division algorithm in sixth grade continues the two-year delay.  For many fourth graders, time spent working on addition and subtraction will be wasted time; they already have a firm understanding of those operations.  The same is true for many sixth graders: time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers.  The numerator in Carroll’s instructional time model will be greater than the denominator, indicating an inefficient allocation of time to instruction.
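
As a back-of-the-envelope illustration of that last point (the hours below are hypothetical, and Carroll’s model is of course richer than a single ratio), consider a fourth grader who has already mastered multi-digit addition:

```python
def carroll_ratio(time_spent_hours: float, time_needed_hours: float) -> float:
    """Carroll's ratio of time spent on a topic to time needed to learn it.
    A value above 1 signals over-allocated (wasted) instructional time."""
    return time_spent_hours / time_needed_hours

# Hypothetical: the curriculum allocates 20 hours to multi-digit addition,
# but a student who has already mastered it needs only 2 hours of review.
print(carroll_ratio(20, 2))  # 10.0
```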

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures.  Such decisions are made by local educators.  Variation in these decisions will introduce variation in the implementation of the math standards.  It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction.  But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.


[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.  

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001).  “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).


Has Common Core influenced instruction?


The release of 2015 NAEP scores showed national achievement stalling out or falling in reading and mathematics.  The poor results triggered speculation about the effect of the Common Core State Standards (CCSS), the controversial set of standards adopted by more than 40 states since 2010.  Critics of Common Core tended to blame the standards for the disappointing scores.  Its defenders said it was too early to assess CCSS’s impact and that implementation would take many years to unfold.  William J. Bushaw, executive director of the National Assessment Governing Board, cited “curricular uncertainty” as the culprit.  Secretary of Education Arne Duncan argued that new standards typically experience an “implementation dip” in the early days of teachers actually trying to implement them in classrooms.

In the rush to argue whether CCSS has positively or negatively affected American education, these speculations are vague as to how the standards boosted or depressed learning.  They don’t provide a description of the mechanisms, the connective tissue, linking standards to learning.  Bushaw and Duncan come the closest, arguing that the newness of CCSS has created curriculum confusion, but the explanation falls flat for a couple of reasons.  Curriculum in the three states that adopted the standards, rescinded them, then adopted something else should be extremely confused.  But the 2013-2015 NAEP changes for Indiana, Oklahoma, and South Carolina were a little bit better than the national figures, not worse.[i]  In addition, surveys of math teachers conducted in the first year or two after the standards were adopted found that:  a) most teachers liked them, and b) most teachers said they were already teaching in a manner consistent with CCSS.[ii]  They didn’t mention uncertainty.  Recent polls, however, show those positive sentiments eroding. Mr. Bushaw might be mistaking disenchantment for uncertainty.[iii] 

For teachers, the novelty of CCSS should be dissipating.  Common Core’s advocates placed great faith in professional development to implement the standards.  Well, there’s been a lot of it.  Over the past few years, millions of teacher-hours have been devoted to CCSS training.  Whether all that activity had a lasting impact is questionable.  Randomized control trials have been conducted of two large-scale professional development programs.  Interestingly, although they pre-date CCSS, both programs attempted to promote the kind of “instructional shifts” championed by CCSS advocates. The studies found that if teacher behaviors change from such training—and that’s not a certainty—the changes fade after a year or two.  Indeed, that’s a pattern evident in many studies of educational change: a pop at the beginning, followed by fade out.  

My own work analyzing NAEP scores in 2011 and 2013 led me to conclude that the early implementation of CCSS was producing small, positive changes in NAEP.[iv]  I warned that those gains “may be as good as it gets” for CCSS.[v]  Advocates of the standards hope that CCSS will eventually produce long term positive effects as educators learn how to use them.  That’s a reasonable hypothesis.  But it should now be apparent that a counter-hypothesis has equal standing: any positive effect of adopting Common Core may have already occurred.  To be precise, the proposition is this: any effects from adopting new standards and attempting to change curriculum and instruction to conform to those standards occur early and are small in magnitude.   Policymakers still have a couple of arrows left in the implementation quiver, accountability being the most powerful.  Accountability systems have essentially been put on hold as NCLB sputtered to an end and new CCSS tests appeared on the scene.  So the CCSS story isn’t over.  Both hypotheses remain plausible. 

Reading Instruction in 4th and 8th Grades

Back to the mechanisms, the connective tissue binding standards to classrooms.  The 2015 Brown Center Report introduced one possible classroom effect that is showing up in NAEP data: the relative emphasis teachers place on fiction and nonfiction in reading instruction.  The ink was still drying on new Common Core textbooks when a heated debate broke out about CCSS’s recommendation that informational reading should receive greater attention in classrooms.[vi] 

Fiction has long dominated reading instruction.  That dominance appears to be waning.



After 2011, something seems to have happened.  I am more persuaded that Common Core influenced the recent shift towards nonfiction than I am that Common Core has significantly affected student achievement—for either good or ill.   But causality is difficult to confirm or to reject with NAEP data, and trustworthy efforts to do so require a more sophisticated analysis than presented here.

Four lessons from previous education reforms

Nevertheless, the figures above reinforce important lessons that have been learned from previous top-down reforms.  Let’s conclude with four:

1.  There seems to be evidence that CCSS is having an impact on the content of reading instruction, moving from the dominance of fiction over nonfiction to near parity in emphasis.  Unfortunately, as Mark Bauerlein and Sandra Stotsky have pointed out, there is scant evidence that such a shift improves children’s reading.[vii]

2.  Reading more nonfiction does not necessarily mean that students will be reading higher quality texts, even if the materials are aligned with CCSS.   The Core Knowledge Foundation and the Partnership for 21st Century Learning, both supporters of Common Core, have very different ideas on the texts schools should use with the CCSS.[viii] The two organizations advocate for curricula having almost nothing in common.

3.  When it comes to the study of implementing education reforms, analysts tend to focus on the formal channels of implementation and the standard tools of public administration—for example, intergovernmental hand-offs (federal to state to district to school), alignment of curriculum, assessment and other components of the reform, professional development, getting incentives right, and accountability mechanisms.  Analysts often ignore informal channels, and some of those avenues funnel directly into schools and classrooms.[ix]  Politics and the media are often overlooked.  Principals and teachers are aware of the politics swirling around K-12 school reform.  Many educators undoubtedly formed their own opinions on CCSS and the fiction vs. nonfiction debate before the standard managerial efforts touched them.

4.  Local educators whose jobs are related to curriculum almost certainly have ideas about what constitutes good curriculum.  It’s part of the profession.  Major top-down reforms such as CCSS provide local proponents with political cover to pursue curricular and instructional changes that may be politically unpopular in the local jurisdiction.  Anyone who believes nonfiction should have a more prominent role in the K-12 curriculum was handed a lever for promoting his or her beliefs by CCSS. I’ve previously called these the “dog whistles” of top-down curriculum reform, subtle signals that give local advocates license to promote unpopular positions on controversial issues.


[i] In the four subject-grade combinations assessed by NAEP (reading and math at 4th and 8th grades), IN, SC, and OK all exceeded national gains on at least three out of four tests from 2013-2015.  NAEP data can be analyzed using the NAEP Data Explorer: http://nces.ed.gov/nationsreportcard/naepdata/.

[ii] In a Michigan State survey of teachers conducted in 2011, 77 percent of teachers, after being presented with selected CCSS standards for their grade, thought they were the same as their state’s former standards.  http://education.msu.edu/epc/publications/documents/WP33ImplementingtheCommonCoreStandardsforMathematicsWhatWeknowaboutTeacherofMathematicsin41S.pdf

[iii] In the Education Next surveys, 76 percent of teachers supported Common Core in 2013 and 12 percent opposed.  In 2015, 40 percent supported and 50 percent opposed. http://educationnext.org/2015-ednext-poll-school-reform-opt-out-common-core-unions.

[iv] I used variation in state implementation of CCSS to assign the states to three groups and analyzed differences in the groups’ NAEP gains.

[v] http://www.brookings.edu/~/media/research/files/reports/2015/03/bcr/2015-brown-center-report_final.pdf

[vi] http://www.edweek.org/ew/articles/2012/11/14/12cc-nonfiction.h32.html?qs=common+core+fiction

[vii] Mark Bauerlein and Sandra Stotsky (2012). “How Common Core’s ELA Standards Place College Readiness at Risk.” A Pioneer Institute White Paper.

[viii] Compare the P21 Common Core Toolkit (http://www.p21.org/our-work/resources/for-educators/1005-p21-common-core-toolkit) with Core Knowledge ELA Sequence (http://www.coreknowledge.org/ccss).  It is hard to believe that they are talking about the same standards in references to CCSS.

[ix] I elaborate on this point in Chapter 8, “The Fate of Reform,” in The Tracking Wars: State Reform Meets School Policy (Brookings Institution Press, 1999).


Image Source: © Patrick Fallon / Reuters
      
 
 





Brookings Live: Reading and math in the Common Core era


Event Information

March 28, 2016
4:00 PM - 4:30 PM EDT

Online Only
Live Webcast

And more from the Brown Center Report on American Education


The Common Core State Standards have been adopted as the reading and math standards in more than forty states, but are the frontline implementers—teachers and principals—enacting them? As part of the 2016 Brown Center Report on American Education, Tom Loveless examines the degree to which CCSS recommendations have penetrated schools and classrooms. He specifically looks at the impact the standards have had on the emphasis of non-fiction vs. fiction texts in reading, and on enrollment in advanced courses in mathematics.

On March 28, the Brown Center hosted an online discussion of Loveless's findings, moderated by the Urban Institute's Matthew Chingos.  In addition to the Common Core, Loveless and Chingos also discussed the other sections of the three-part Brown Center Report, including a study of the relationship between ability group tracking in eighth grade and AP performance in high school.

Watch the archived video below.
