core Core (Values) Workout By feedproxy.google.com Published On :: Thu, 05 Mar 2020 12:05:58 +0000 This blog was written by Jeb Keiper, CEO of Nimbus Therapeutics LLC, as part of the From The Trenches feature of LifeSciVC. Like many middle-aged weekend warriors, I’ve been recently sidelined by injury simply through doing what I’ve regularly done: The post Core (Values) Workout appeared first on LifeSciVC. Full Article Corporate Culture From The Trenches Leadership
core Yokogawa Releases ProSafe-RS R4.05.00, the Latest Version of a Core Product in the OpreX Control and Safety System Family By www.yokogawa.com Published On :: 2019-11-01T16:00:00+09:00 Yokogawa Electric Corporation (TOKYO: 6841) announces the November 15 release of ProSafe-RS R4.05.00, an enhanced version of the ProSafe-RS safety instrumented system. ProSafe-RS is a core product of the OpreX Control and Safety System family. Full Article
core RBI Assistant Scorecard Prelims 2020 link out @rbi.org.in: Download Here; Mains Exam Date Postponed By www.jagranjosh.com Published On :: 2020-03-18T04:49:00Z The RBI Assistant Prelims Scorecard 2020 has been released at rbi.org.in. Check your marks through the direct link and download your marksheet. The RBI Assistant Mains exam has been postponed by the Reserve Bank of India; the new exam date will be announced soon. Full Article
core Combining clinical and candidate gene data into a risk score for azathioprine-associated leukopenia in routine clinical practice By feeds.nature.com Published On :: 2020-02-14 Full Article
core The Dark Core of Personality By feeds.nature.com Published On :: 2018-10-11 Nine factors can determine how malevolent you are Full Article
core Clinical utility of the exosome based ExoDx Prostate (IntelliScore) EPI test in men presenting for initial Biopsy with a PSA 2–10 ng/mL By feeds.nature.com Published On :: 2020-05-07 Full Article
core High rate of durable remissions post autologous stem cell transplantation for core-binding factor acute myeloid leukaemia in second complete remission By feeds.nature.com Published On :: 2020-05-06 Full Article
core Trans-Atlantic Scorecard – January 2020 By webfeeds.brookings.edu Published On :: Welcome to the sixth edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Restoring Prosperity: The State Role in Revitalizing Ohio's Core Communities By webfeeds.brookings.edu Published On :: Wed, 10 Sep 2008 07:30:00 -0400 Event Information September 10, 2008, 7:30 AM - 4:30 PM EDT, Columbus Convention Center, 400 North Street, Columbus, OH 46085. The 2008 Ohio Summit – Restoring Our Prosperity: The State Role in Revitalizing Ohio's Core Communities convened more than 1,000 government, corporate, civic, neighborhood, and academic leaders from around the state, including Governor Ted Strickland, Lieutenant Governor Lee Fisher, Senate President Bill Harris, and Speaker of the House Jon Husted, confirmed as speakers. The Summit was co-convened by the Metropolitan Policy Program at Brookings and Greater Ohio. Its purpose was to elicit reaction to a draft set of proposals for state policy reforms, reflecting a critique of past policies, aimed at revitalizing communities throughout Ohio. Each of the recommendations was carefully tailored to the unique assets and challenges of Ohio's 32 core communities, whose revitalization is the springboard to a more prosperous and competitive state as a whole. Comments derived from this gathering will help to shape the final report to be released in early 2009. Presenters included Bruce Katz (Vice President and Director, Metropolitan Policy Program at Brookings), Scott Bernstein (President, Center for Neighborhood Technology), Rob Greenbaum (John Glenn School of Public Affairs, Ohio State University), Mark Partridge (Swank Chair in Rural-Urban Policy, The Ohio State University), Jane Dockery (Center for Urban and Public Affairs, Wright State University), and Alan Mallach (Nonresident Senior Fellow, Brookings Institution); presentation slides, the summit agenda, speaker biographies, the executive summary, and working drafts of the Restoring Prosperity report are available as event resources. Full Article
core Trans-Atlantic Scorecard – April 2020 By webfeeds.brookings.edu Published On :: Thu, 23 Apr 2020 15:12:26 +0000 Welcome to the seventh edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Mapping racial inequity amid COVID-19 underscores policy discriminations against Black Americans By webfeeds.brookings.edu Published On :: Thu, 16 Apr 2020 14:56:07 +0000 A spate of recent news accounts reveals what many experts have feared: Black communities in the U.S. are experiencing some of the highest fatality rates from COVID-19. But without an understanding of the policy contexts that have shaped conditions in Black-majority neighborhoods, one may assume the rapid spread of the coronavirus there is caused by… Full Article
core Upcoming Brookings report and scorecard highlight pathways and progress toward financial inclusion By webfeeds.brookings.edu Published On :: Thu, 20 Aug 2015 07:30:00 -0400 Editor's Note: Brookings will hold an event and live webcast on Wednesday, August 26 to discuss the findings of the 2015 Financial and Digital Inclusion (FDIP) Report and Scorecard. Follow the conversation on Twitter using #FinancialInclusion. Access to affordable, quality financial services is vital both for ensuring the financial well-being of individuals and for fostering broader economic development. Yet about 2 billion adults around the world still do not have formal financial accounts. The Financial and Digital Inclusion Project (FDIP), launched within the Center for Technology Innovation at Brookings, set out to answer three key questions: Do country commitments make a difference in progress toward financial inclusion? To what extent do mobile and other digital technologies advance financial inclusion? What legal, policy, and regulatory approaches promote financial inclusion? To answer these questions, the FDIP team spent the past year examining how governments, private sector entities, non-government organizations, and the general public across 21 diverse countries have worked together to advance access to and usage of formal financial services. This research informed the development of the 2015 Report and Scorecard — the first in a 3-year series of research on the topic. For the 2015 Scorecard, FDIP researchers assessed 33 indicators across four dimensions of financial inclusion: country commitment, mobile capacity, regulatory environment, and adoption of selected basic traditional and digital financial services. The 2015 FDIP Report and Scorecard provide detailed profiles of the financial inclusion landscape in 21 countries, focusing on mobile money and other digital financial services. On August 26, the Center for Technology Innovation will discuss the findings of the 2015 Report and Scorecard and host a conversation about key trends, opportunities, and obstacles surrounding financial inclusion among authorities from the public and private sectors. Register to attend the event in-person or by webcast, and join the conversation on Twitter at #FinancialInclusion. Authors: Darrell M. West and John Villasenor Image Source: © Noor Khamis / Reuters Full Article
core Five key findings from the 2015 Financial and Digital Inclusion Project Report & Scorecard By webfeeds.brookings.edu Published On :: Wed, 02 Sep 2015 07:30:00 -0400

Editor's note: This post is part of a series on the Brookings Financial and Digital Inclusion Project, which aims to measure access to and usage of financial services among individuals who have historically been disproportionately excluded from the formal financial system. To read the first annual FDIP report, learn more about the methodology, and watch the 2015 launch event, visit the 2015 Report and Scorecard webpage.

Convenient access to banking infrastructure is something many people around the world take for granted. Yet while the number of people outside the formal financial system has substantially decreased in recent years, 2 billion adults still do not have an account with a formal financial institution or mobile money provider.1 This means that significant opportunities remain to provide access to and promote use of affordable financial services that can help people manage their financial lives more safely and efficiently.

To learn more about how countries can facilitate greater financial inclusion among underserved groups, the Brookings Financial and Digital Inclusion Project (FDIP) sought to answer the following questions: (1) Do country commitments make a difference in progress toward financial inclusion? (2) To what extent do mobile and other digital technologies advance financial inclusion? (3) What legal, policy, and regulatory approaches promote financial inclusion?

To address these questions, the FDIP team assessed 33 indicators of financial inclusion across 21 economically, geographically, and politically diverse countries that have all made recent commitments to advancing financial inclusion. Indicators fell within four key dimensions of financial inclusion: country commitment, mobile capacity, regulatory commitment, and adoption of selected traditional and digital financial services. In an effort to obtain the most accurate and up-to-date understanding of the financial inclusion landscape possible, the FDIP team engaged with a wide range of experts — including financial inclusion authorities in the FDIP focus countries — and also consulted international non-governmental organization publications, government documents, news sources, and supply- and demand-side data sets. Our research led to five overarching findings.

1. Country commitments matter. Not only did our 21 focus countries make commitments toward financial inclusion, but countries generally took these commitments seriously and made progress toward their goals. For example, the top five countries within the scorecard each completed at least one of their national-level financial inclusion targets. While correlation does not necessarily equal causation, our research supports findings by other financial inclusion experts that national-level country commitments are associated with greater financial inclusion progress. For example, the World Bank has noted that countries with national financial inclusion strategies have twice the average increase in the number of account holders as countries that do not have these strategies in place.

2. The movement toward digital financial services will accelerate financial inclusion. Digital financial services can provide customers with greater security, privacy, and convenience than transacting via traditional "brick-and-mortar" banks. We predict that digital financial services such as mobile money will become increasingly prevalent across demographics, particularly as user-friendly smartphones become cheaper2 and more widespread.3 Mobile money has already driven financial inclusion, particularly in countries where traditional banking infrastructure is limited. For example, mobile money offerings in Kenya (particularly the widely popular M-Pesa service) are credited with advancing financial inclusion: The Global Financial Inclusion (Global Findex) database found that the percentage of adults with a formal account in Kenya increased from about 42 percent in 2011 to about 75 percent in 2014, with around 58 percent of adults in Kenya having used mobile money within the preceding 12 months as of 2014.

3. Geography generally matters less than policy, legal, and regulatory changes, although some regional trends in terms of financial services provision are evident. Regional trends include the widespread use of banking agents (sometimes known as correspondents)4 in Latin America, in which retail outlets and other third parties are able to offer some financial services on behalf of banks,5 and the prevalence of mobile money in sub-Saharan Africa. However, these regional trends aren't absolute: For example, post office branches have served as popular financial access points in South Africa,6 and the GSMA's "2014 State of the Industry" report found that the highest growth in the number of mobile money accounts between December 2013 and December 2014 was in Latin America. Overall, we found high-performing countries across multiple regions and using multiple approaches, demonstrating that there are diverse pathways to achieving greater financial inclusion.

4. Central banks, ministries of finance, ministries of communications, banks, non-bank financial providers, and mobile network operators have major roles in achieving greater financial inclusion. These entities should closely coordinate with respect to policy, regulatory, and technological advances. With the roles of public and private sector entities within the financial sector becoming increasingly intertwined, coordination across sectors is critical to developing coherent and effective policies. Countries that performed strongly on the country commitment and regulatory environment components of the FDIP Scorecard generally demonstrated close coordination among public and private sector entities that informed the emergence of an enabling regulatory framework. For example, Tanzania's National Financial Inclusion Framework7 promotes competition and innovation within the financial services sector by reflecting both public and private sector voices.8

5. Full financial inclusion cannot be achieved without addressing the financial inclusion gender gap and accounting for diverse cultural contexts with respect to financial services. Persistent gender disparities in terms of access to and usage of formal financial services must be addressed in order to achieve financial inclusion. For example, Middle Eastern countries such as Afghanistan and Pakistan have demonstrated a significant gap in formal account ownership between men and women. Guardianship and inheritance laws concerning account opening and property ownership present cultural and legal barriers that contribute to this gender gap.9 Understanding diverse cultural contexts is also critical to advancing financial inclusion sustainably. In the Philippines, non-bank financial service providers such as pawn shops are popular venues for accessing financial services.10 Leveraging these providers as agents can therefore be a useful way to harness trust in these systems to increase financial inclusion.

To dive deeper into the report's findings and compare country rankings, visit the FDIP interactive. We also welcome feedback about the 2015 Report and Scorecard at FDIPComments@brookings.edu.

1 Asli Demirguc-Kunt, Leora Klapper, Dorothe Singer, and Peter Van Oudheusden, "The Global Findex Database 2014: Measuring Financial Inclusion around the World," World Bank Policy Research Working Paper 7255, April 2015, VI, http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2015/04/15/090224b082dca3aa/1_0/Rendered/PDF/The0Global0Fin0ion0around0the0world.pdf#page=3.

2 Claire Scharwatt, Arunjay Katakam, Jennifer Frydrych, Alix Murphy, and Nika Naghavi, "2014 State of the Industry: Mobile Financial Services for the Unbanked," GSMA, 2015, p. 24, http://www.gsma.com/mobilefordevelopment/wp-content/uploads/2015/03/SOTIR_2014.pdf.

3 GSMA Intelligence, "The Mobile Economy 2015," 2015, pp. 13-14, http://www.gsmamobileeconomy.com/GSMA_Global_Mobile_Economy_Report_2015.pdf.

4 Caitlin Sanford, "Do agents improve financial inclusion? Evidence from a national survey in Brazil," Bankable Frontier Associates, November 2013, p. 1, http://bankablefrontier.com/wp-content/uploads/documents/BFA-Focus-Note-Do-agents-improve-financial-inclusion-Brazil.pdf.

5 Alliance for Financial Inclusion, "Discussion paper: Agent banking in Latin America," 2012, p. 3, http://www.afi-global.org/sites/default/files/discussion_paper_-_agent_banking_latin_america.pdf.

6 The National Treasury, South Africa and the AFI Financial Inclusion Data Working Group, "The Use of Financial Inclusion Data Country Case Study: South Africa – The Mzansi Story and Beyond," January 2014, http://www.afi-global.org/sites/default/files/publications/the_use_of_financial_inclusion_data_country_case_study_south_africa.pdf.

7 Tanzania National Council for Financial Inclusion, "National Financial Inclusion Framework: A Public-Private Stakeholders' Initiative (2014-2016)," 2013, pp. 19-22, http://www.afi-global.org/sites/default/files/publications/tanzania-national-financial-inclusion-framework-2014-2016.pdf.

8 Simone di Castri and Lara Gidvani, "Enabling Mobile Money Policies in Tanzania," GSMA, February 2014, http://www.gsma.com/mobilefordevelopment/wp-content/uploads/2014/03/Tanzania-Enabling-Mobile-Money-Policies.pdf.

9 Mayada El-Zoghbi, "Mind the Gap: Women and Access to Finance," Consultative Group to Assist the Poor, 13 May 2015, http://www.cgap.org/blog/mind-gap-women-and-access-finance.

10 Xavier Martin and Amarnath Samarapally, "The Philippines: Marshalling Data, Policy, and a Diverse Industry for Financial Inclusion," FINclusion Lab by MIX, June 2014, http://finclusionlab.org/blog/philippines-marshalling-data-policy-and-diverse-industry-financial-inclusion.

Authors: Robin Lewis, John Villasenor, and Darrell M. West Full Article
core Inclusion in India: Unpacking the 2015 FDIP Report and Scorecard By webfeeds.brookings.edu Published On :: Wed, 09 Sep 2015 07:30:00 -0400

Editor's Note: The Center for Technology Innovation released the 2015 Financial and Digital Inclusion Project (FDIP) Report on August 26th. TechTank has previously covered the FDIP launch event and outlined the report's overall findings. Over the next two months, TechTank will take a closer look at the report's findings by country and by region, beginning with today's post on India.

With about 21 percent of the world's entire unbanked adult population residing in India as of 2014, the country has tremendous opportunities for growth in terms of advancing access to and use of formal financial services. In the 2015 Financial and Digital Inclusion Project (FDIP) Report and Scorecard, we detail the progress achieved and possibilities remaining for India's financial services ecosystem as it moves from a heavy reliance on cash to an array of traditional and digital financial services offered by diverse financial providers. As noted in the 2015 FDIP Report, government-led initiatives to promote financial inclusion have advanced access to financial services in India. Ownership of formal financial institution and mobile money accounts among adults in India increased about 18 percentage points between 2011 and 2014. Recent regulatory changes and public and private sector initiatives are expected to further promote use of these services. In this post, we unpack the four components of the 2015 FDIP Scorecard — country commitment, mobile capacity, regulatory environment, and adoption of traditional and digital financial services — to highlight India's achievements and possible next steps toward greater financial inclusion.

Country commitment: An unprecedented year with no sign of slowing

India's national-level commitment to promoting financial inclusion earned it a "country commitment" score of 100 percent. A historic government initiative helped India garner a top score: In August 2014, Prime Minister Narendra Modi launched the "Pradhan Mantri Jan-Dhan Yojana," the Prime Minister's People's Wealth Scheme (PMJDY). This effort — arguably the largest financial inclusion initiative in the world — "envisages universal access to banking facilities with at least one basic banking account for every household, financial literacy, access to credit, insurance and pension facility," in addition to providing beneficiaries with a RuPay debit card. As part of this effort, the program aimed to provide 75 million unbanked adults in India with accounts by late January 2015. As of September 2015, about 180 million accounts had been opened; about 44 percent of these accounts did not carry a balance, down from about 76 percent in September 2014. The PMJDY initiative is a component of the JAM Trinity, or "Jan-Dhan, Aadhaar and Mobile." Under this approach, government transfers (also known as Direct Benefit Transfers, or DBT) will be channeled through bank accounts provided under Jan-Dhan, Aadhaar identification numbers or biometric IDs, and mobile phone numbers. The Pratyaksh Hanstantrit Labh (PaHaL) program is a major DBT initiative in which subsidies for liquefied petroleum gas can be linked to an Aadhaar number that is connected to a bank account or the consumer's bank details. As of July 2015, about $2 billion had been channeled to beneficiaries in 130 million households across the country.

Mobile capacity: Ample opportunity for digital services, but limited awareness and use

India received 16th place (out of the 21 countries considered) in the 2015 FDIP Report and Scorecard's mobile capacity ranking. India's mobile money landscape features an extensive array of services, and the licensing of new payments banks (discussed below) may drive the entry of new players and products that can improve low levels of awareness and adoption of digital financial services. An InterMedia survey conducted from September to December 2014 found that while 86 percent of adults owned or could borrow a mobile phone, only about 13 percent of adults were aware of mobile money. Awareness of mobile money is increasing — the 13 percent figure is double that of the first wave of the survey, which concluded in January 2014 — but uptake remains low. The Global Financial Inclusion (Global Findex) database found only 2 percent of adults in India had a mobile money account in 2014. Implementing interoperability across mobile money offerings, increasing 3G network coverage by population, and enhancing unique mobile subscribership could boost India's mobile capacity score in future editions of the FDIP report.

Regulatory environment: Opening up the playing field to non-bank entities

India tied for 7th place on the regulatory environment component of the 2015 Scorecard. The country's recent shift to a more open financial landscape contributed to its strong score, although more time is needed to see how recent regulations will be operationalized. India has traditionally maintained tight restrictions with respect to which entities are involved in financial service provision. Non-banks could manage an agent network on behalf of a bank as business correspondents or issue "semi-closed" wallets that did not permit customers to withdraw funds without transferring them to a full-service bank account. These restrictions likely contributed to the country's slow and limited adoption of mobile money services. However, 2014 brought significant changes to India's regulatory landscape. The Reserve Bank of India's November 2014 Payments Banks guidelines were heralded as a major step forward for increasing diversity in the financial services ecosystem. These guidelines marked a significant shift from India's "bank-led" approach by providing opportunities for non-banks such as mobile network operators to leverage their distribution expertise to advance financial access and use among underserved groups. While these institutions cannot offer credit, they can distribute credit on behalf of a financial services provider. They may also distribute insurance and pension products, in addition to offering interest-bearing deposit accounts. We noted in the 2015 FDIP Report that timely approval of license applications for prospective payments banks, particularly mobile network operators, would be a valuable next step for India's financial inclusion path. In August 2015, the Reserve Bank of India approved 11 applicants, including five mobile network operators, to launch payments banks within the next 18 months. As noted in Quartz India, the "underlying objective is to use these new banks to push for greater financial inclusion." India has also made strides in terms of establishing proportionate "know-your-customer" requirements for financial entities, including payments banks. While India has made significant progress in terms of promoting a more enabling regulatory environment, room for improvement remains. For example, concerns have been raised regarding the low commission rate for banks distributing DBT, with many experts noting that a higher commission would enhance the ability of these banks to operate sustainably.

Adoption: Access is improving, but promoting use is key

India ranked 9th for the adoption component of the 2015 Scorecard. Recent studies have demonstrated that adoption of formal financial services among traditionally underserved groups is improving. For example, InterMedia surveys conducted in October 2013 to January 2014 and September to December 2014 found that the most significant increase in bank account ownership was among women, particularly women living below the poverty line. Still, further work is needed to close the gender gap in account ownership. As noted above, adoption of digital financial services such as mobile money is minimal compared with traditional bank accounts (0.3 percent compared with 55 percent, according to the September to December 2014 InterMedia survey); nonetheless, we believe that the introduction of payments banks, combined with government efforts to digitize transfers, will facilitate greater adoption of digital financial services. While PMJDY has successfully promoted ownership of bank accounts, incentivizing use of these services is critical for achieving true financial inclusion. Dormancy rates in India are high — about 43 percent of accounts had not been deposited into or withdrawn from in the previous 12 months, according to the 2014 Global Findex. More time may be needed for individuals to understand how their new accounts function and, equally importantly, how their new accounts are relevant to their daily lives. A February 2015 survey designed by India's Ministry of Finance, MicroSave, and the Bill & Melinda Gates Foundation found about 86 percent of PMJDY account holders reported the account was their first bank account. While this survey is not nationally representative, it provides some context as to why efforts to promote trust in and understanding of these new accounts will be key to the success of the program. An opportunity for promoting adoption of digital financial services was highlighted during the public launch of the 2015 Report and Scorecard: As of June 2015, it was estimated that fewer than 6 percent of merchants in India accepted digital payments. The U.S. government is partnering with the government of India to promote the shift to digitizing transactions, including at merchants. The next annual FDIP Report will examine the outcomes of such initiatives as we assess India's progress toward greater financial inclusion.

Suggestions and other comments regarding the FDIP Report and Scorecard are welcomed at FDIPComments@brookings.edu. Authors: Robin Lewis, John Villasenor, and Darrell M. West Image Source: © Mansi Thapliyal / Reuters Full Article
core Turkey’s Erdoğan scores a pyrrhic victory in Washington By webfeeds.brookings.edu Published On :: Mon, 18 Nov 2019 16:41:15 +0000 Turkish President Recep Tayyip Erdoğan received a warm welcome at the White House last Wednesday. But this facade of good relations between the two countries is highly deceiving. Indeed, any sense of victory Turkey might claim from the outwardly friendly visit with Donald Trump is an illusion. In reality, the two countries are wide apart… Full Article
core To British voters: Don’t score an own goal By webfeeds.brookings.edu Published On :: Mon, 30 Nov -0001 00:00:00 +0000 Those who advocate for a British exit from the European Union seem to think that they can turn back the clock on globalization. They can’t, writes Arturo Sarukhan, who outlines the problematic ripple effects that would likely come with Brexit. Full Article Uncategorized
core Trans-Atlantic Scorecard – July 2019 By webfeeds.brookings.edu Published On :: Thu, 18 Jul 2019 13:30:26 +0000 Welcome to the fourth edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Trans-Atlantic Scorecard – October 2019 By webfeeds.brookings.edu Published On :: Wed, 23 Oct 2019 14:38:07 +0000 Welcome to the fifth edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Trans-Atlantic Scorecard – January 2019 By webfeeds.brookings.edu Published On :: Fri, 18 Jan 2019 17:00:33 +0000 Welcome to the second edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Trans-Atlantic Scorecard – April 2019 By webfeeds.brookings.edu Published On :: Fri, 19 Apr 2019 15:37:02 +0000 Welcome to the third edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Trans-Atlantic Scorecard – September 2018 By webfeeds.brookings.edu Published On :: Mon, 17 Sep 2018 16:00:55 +0000 Welcome to the first edition of the Trans-Atlantic Scorecard, a new quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we polled Brookings experts on the present state of U.S. relations with Europe—overall… Full Article
core First Steps Toward a Quality of Climate Finance Scorecard (QUODA-CF): Creating a Comparative Index to Assess International Climate Finance Contributions By webfeeds.brookings.edu Published On :: Executive Summary Are climate finance contributor countries, multilateral aid agencies and specialized funds using widely accepted best practices in foreign assistance? How is it possible to measure and compare international climate finance contributions when there are as yet no established metrics or agreed definitions of the quality of climate finance? As a subjective metric, quality… Full Article
core High Achievers, Tracking, and the Common Core By webfeeds.brookings.edu Published On :: Thu, 29 Jan 2015 09:00:00 -0500

A curriculum controversy is roiling schools in the San Francisco Bay Area. In the past few months, parents in the San Mateo-Foster City School District, located just south of San Francisco International Airport, voiced concerns over changes to the middle school math program. The changes were brought about by the Common Core State Standards (CCSS). Under previous policies, most eighth graders in the district took algebra I. Some very sharp math students, who had already completed algebra I in seventh grade, took geometry in eighth grade. The new CCSS-aligned math program will reduce eighth grade enrollments in algebra I and eliminate geometry altogether as a middle school course.

A little background information will clarify the controversy. Eighth grade mathematics may be the single grade-subject combination most profoundly affected by the CCSS. In California, the push for most students to complete algebra I by the end of eighth grade has been a centerpiece of state policy, as it has been in several states influenced by the "Algebra for All" movement that began in the 1990s. Nationwide, in 1990, about 16 percent of all eighth graders reported that they were taking an algebra or geometry course. In 2013, the number was three times larger, and nearly half of all eighth graders (48 percent) were taking algebra or geometry.[i] When that percentage goes down, as it is sure to under the CCSS, what happens to high achieving math students?

The parents who are expressing the most concern have kids who excel at math. One parent in San Mateo-Foster City told The San Mateo Daily Journal, "This is really holding the advanced kids back."[ii] The CCSS math standards recommend a single math course for seventh grade, integrating several math topics, followed by a similarly integrated math course in eighth grade. Algebra I won't be offered until ninth grade. The San Mateo-Foster City School District decided to adopt a "three years into two" accelerated option. This strategy is suggested on the Common Core website as an option that districts may consider for advanced students. It combines the curriculum from grades seven through nine (including algebra I) into a two-year offering that students can take in seventh and eighth grades.[iii] The district will also provide—at one school site—a sequence beginning in sixth grade that compacts four years of math into three. Both accelerated options culminate in the completion of algebra I in eighth grade.

The San Mateo-Foster City School District is home to many well-educated, high-powered professionals who work in Silicon Valley. They are unrelentingly liberal in their politics. Equity is a value they hold dear.[iv] They also know that completing at least one high school math course in middle school is essential for students who wish to take AP Calculus in their senior year of high school. As CCSS is implemented across the nation, administrators in districts with demographic profiles similar to San Mateo-Foster City will face parents of mathematically precocious kids asking whether the "common" in Common Core mandates that all students take the same math course. Many of those districts will respond to their constituents and provide accelerated pathways ("pathway" is CCSS jargon for course sequence). But other districts will not. Data show that urban schools, schools with large numbers of black and Hispanic students, and schools located in impoverished neighborhoods are reluctant to differentiate curriculum. It is unlikely that gifted math students in those districts will be offered an accelerated option under CCSS. The reason why can be summed up in one word: tracking.

Tracking in eighth grade math means providing different courses to students based on their prior math achievement. The term "tracking" has been stigmatized, coming under fire for being inequitable. Historically, where tracking existed, black, Hispanic, and disadvantaged students were often underrepresented in high-level math classes; white, Asian, and middle-class students were often over-represented. An anti-tracking movement gained a full head of steam in the 1980s. Tracking reformers knew that persuading high schools to de-track was hopeless. Consequently, tracking's critics focused reform efforts on middle schools, urging that they group students heterogeneously, with all students studying a common curriculum. That approach took hold in urban districts, but not in the suburbs.

Now the Common Core and de-tracking are linked. Providing an accelerated math track for high achievers has become a flashpoint throughout the San Francisco Bay Area. An October 2014 article in The San Jose Mercury News named Palo Alto, Saratoga, Cupertino, Pleasanton, and Los Gatos as districts that have announced, in response to parent pressure, that they are maintaining an accelerated math track in middle schools. These are high-achieving, suburban districts. Los Gatos parents took to the internet with a petition drive when a rumor spread that advanced courses would end. Ed Source reports that 900 parents signed a petition opposing the move and that board meetings on the issue were packed with opponents. The accelerated track was kept. Piedmont established a single track for everyone, but allowed parents to apply for an accelerated option. About twenty-five percent did so. The Mercury News story underscores the demographic pattern that is unfolding and asks whether CCSS "could cement a two-tier system, with accelerated math being the norm in wealthy areas and the exception elsewhere."

What is CCSS's real role here? Does the Common Core take an explicit stand on tracking? Not really. But de-tracking advocates can interpret the "common" in Common Core as license to eliminate accelerated tracks for high achievers. As a noted CCSS supporter (and tracking critic), William H. Schmidt has stated, "By insisting on common content for all students at each grade level and in every community, the Common Core mathematics standards are in direct conflict with the concept of tracking."[v] Thus, tracking joins other controversial curricular ideas—e.g., integrated math courses instead of courses organized by content domains such as algebra and geometry; an emphasis on "deep," conceptual mathematics over learning procedures and basic skills—as "dog whistles" embedded in the Common Core. Controversial positions aren't explicitly stated, but they can be heard by those who want to hear them.

CCSS doesn't have to take an outright stand on these debates in order to have an effect on policy. For the practical questions that local grouping policies resolve—who takes what courses and when do they take them—CCSS wipes the slate clean. There are plenty of people ready to write on that blank slate, particularly administrators frustrated by unsuccessful efforts to de-track in the past. Suburban parents are mobilized in defense of accelerated options for advantaged students. What about kids who are outstanding math students but also happen to be poor, black, or Hispanic? What happens to them, especially if they attend schools in which the top institutional concern is meeting the needs of kids functioning several years below grade level?

I presented a paper on this question at a December 2014 conference held by the Fordham Institute in Washington, DC. I proposed a pilot program of "tracking for equity." By that term, I mean offering black, Hispanic, and poor high achievers the same opportunity that the suburban districts in the Bay Area are offering. High achieving middle school students in poor neighborhoods would be able to take three years of math in two years and proceed on a path toward AP Calculus as high school seniors.

It is true that tracking must be done carefully. Tracking can be conducted unfairly and has been used unjustly in the past. One of the worst consequences of earlier forms of tracking was that low-skilled students were tracked into dead-end courses that did nothing to help them academically. These low-skilled students were disproportionately from disadvantaged communities or communities of color. That's not a danger in the proposal I am making. The default curriculum, the one every student would take if not taking the advanced track, would be the Common Core. If that's a dead end for low achievers, Common Core supporters need to start being more honest in how they are selling the CCSS. Moreover, to ensure that the policy gets to the students for whom it is intended, I have proposed running the pilot program in schools predominantly populated by poor, black, or Hispanic students. The pilot won't promote segregation within schools because the sad reality is that participating schools are already segregated.

Since I presented the paper, I have privately received negative feedback from both Algebra for All advocates and Common Core supporters. That's disappointing. Because of their animus toward tracking, some critics seem to support a severe policy swing from Algebra for All, which was pursued for equity, to Algebra for None, which will be pursued for equity. It's as if either everyone or no one should be allowed to take algebra in eighth grade. The argument is that allowing only some eighth graders to enroll in algebra is elitist, even if the students in question are poor students of color who are prepared for the course and likely to benefit from taking it.

The controversy raises crucial questions about the Common Core. What's common in the common core? Is it the curriculum? And does that mean the same curriculum for all? Will CCSS serve as a curricular floor, ensuring all students are exposed to a common body of knowledge and skills? Or will it serve as a ceiling, limiting the progress of bright students so that their achievement looks more like that of their peers? These questions will be answered differently in different communities, and as they are, the inequities that Common Core supporters think they're addressing may surface again in a profound form.

[i] Loveless, T. (2008). The 2008 Brown Center Report on American Education. Retrieved from http://www.brookings.edu/research/reports/2009/02/25-education-loveless. For San Mateo-Foster City's sequence of math courses, see page 10 of http://smfc-ca.schoolloop.com/file/1383373423032/1229222942231/1242346905166154769.pdf

[ii] Swartz, A. (2014, November 22). "Parents worry over losing advanced math classes: San Mateo-Foster City Elementary School District revamps offerings because of Common Core." San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-11-22/parents-worry-over-losing-advanced-math-classes-san-mateo-foster-city-elementary-school-district-revamps-offerings-because-of-common-core/1776425133822.html

[iii] Swartz, A. (2014, December 26). "Changing Classes Concern for parents, teachers: Administrators say Common Core Standards Reason for Modifications." San Mateo Daily Journal. Retrieved from http://www.smdailyjournal.com/articles/lnews/2014-12-26/changing-classes-concern-for-parents-teachers-administrators-say-common-core-standards-reason-for-modifications/1776425135624.html

[iv] In the 2014 election, Jerry Brown (D) took 75% of Foster City's votes for governor. In the 2012 presidential election, Barack Obama received 71% of the vote. http://www.city-data.com/city/Foster-City-California.html

[v] Schmidt, W.H., and Burroughs, N.A. (2012). "How the Common Core Boosts Quality and Equality." Educational Leadership, December 2012/January 2013, Vol. 70, No. 4, pp. 54-58.

Author: Tom Loveless Full Article
core Measuring effects of the Common Core By webfeeds.brookings.edu Published On :: Tue, 24 Mar 2015 00:00:00 -0400 Part II of the 2015 Brown Center Report on American Education

Over the next several years, policy analysts will evaluate the impact of the Common Core State Standards (CCSS) on U.S. education. The task promises to be challenging. The question most analysts will focus on is whether the CCSS is good or bad policy. This section of the Brown Center Report (BCR) tackles a set of seemingly innocuous questions compared to the hot-button question of whether Common Core is wise or foolish. The questions all have to do with when Common Core actually started, or more precisely, when the Common Core started having an effect on student learning. And if it hasn't yet had an effect, how will we know that CCSS has started to influence student achievement?

The analysis below probes this issue empirically, hopefully persuading readers that deciding when a policy begins is elemental to evaluating its effects. The question of a policy's starting point is not always easy to answer. Yet the answer has consequences. You can't figure out whether a policy worked or not unless you know when it began.[i] The analysis uses surveys of state implementation to model different CCSS starting points for states and produces a second early report card on how CCSS is doing. The first report card, focusing on math, was presented in last year's BCR. The current study updates state implementation ratings that were presented in that report and extends the analysis to achievement in reading. The goal is not only to estimate CCSS's early impact, but also to lay out a fair approach for establishing when the Common Core's impact began—and to do it now, before data are generated that either critics or supporters can use to bolster their arguments. The experience of No Child Left Behind (NCLB) illustrates this necessity.

Background

After the 2008 National Assessment of Educational Progress (NAEP) scores were released, former Secretary of Education Margaret Spellings claimed that the new scores showed "we are on the right track."[ii] She pointed out that NAEP gains in the previous decade, 1999-2009, were much larger than in prior decades. Mark Schneider of the American Institutes for Research (and a former Commissioner of the National Center for Education Statistics [NCES]) reached a different conclusion. He compared NAEP gains from 1996-2003 to 2003-2009 and declared NCLB's impact disappointing: "The pre-NCLB gains were greater than the post-NCLB gains."[iii] It is important to highlight that Schneider used the 2003 NAEP scores as the starting point for assessing NCLB. A report from FairTest on the tenth anniversary of NCLB used the same demarcation for pre- and post-NCLB time frames.[iv] FairTest is an advocacy group critical of high-stakes testing—and harshly critical of NCLB—but if the 2003 starting point for NAEP is accepted, its conclusion is indisputable: "NAEP score improvement slowed or stopped in both reading and math after NCLB was implemented."

Choosing 2003 as NCLB's starting date is intuitively appealing. The law was introduced, debated, and passed by Congress in 2001. President Bush signed NCLB into law on January 8, 2002. It takes time to implement any law. The 2003 NAEP is arguably the first chance that the assessment had to register NCLB's effects. Selecting 2003 is consequential, however. Some of the largest gains in NAEP's history were registered between 2000 and 2003. Once 2003 is established as a starting point (or baseline), pre-2003 gains become "pre-NCLB." But what if the 2003 NAEP scores were influenced by NCLB? Experiments evaluating the effects of new drugs collect baseline data from subjects before treatment, not after the treatment has begun. Similarly, evaluating the effects of public policies requires that baseline data are not influenced by the policies under evaluation.

Avoiding such problems is particularly difficult when state or local policies are adopted nationally. The federal effort to establish a speed limit of 55 miles per hour in the 1970s is a good example. Several states already had speed limits of 55 mph or lower prior to the federal law's enactment. Moreover, a few states lowered speed limits in anticipation of the federal limit while the bill was debated in Congress. On the day President Nixon signed the bill into law—January 2, 1974—the Associated Press reported that only 29 states would be required to lower speed limits. Evaluating the effects of the 1974 law with national data but neglecting to adjust for what states were already doing would obviously yield tainted baseline data.

There are comparable reasons for questioning 2003 as a good baseline for evaluating NCLB's effects. The key components of NCLB's accountability provisions—testing students, publicizing the results, and holding schools accountable for results—were already in place in nearly half the states. In some states they had been in place for several years. The 1999 iteration of Quality Counts, Education Week's annual report on state-level efforts to improve public education, entitled Rewarding Results, Punishing Failure, was devoted to state accountability systems and the assessments underpinning them. Testing and accountability are especially important because they have drawn fire from critics of NCLB, a law that wasn't passed until years later. The Congressional debate of NCLB legislation took all of 2001, allowing states to pass anticipatory policies. Derek Neal and Diane Whitmore Schanzenbach reported that "with the passage of NCLB lurking on the horizon," Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes.[v] In the summer and fall of 2002, with NCLB now the law of the land, state after state released lists of schools falling short of NCLB's requirements. Then the 2002-2003 school year began, during which the 2003 NAEP was administered. Using 2003 as a NAEP baseline assumes that none of these activities—previous accountability systems, public lists of schools in need of improvement, anticipatory policy shifts—influenced achievement. That is unlikely.[vi]

The Analysis

Unlike NCLB, there was no "pre-CCSS" state version of Common Core. States vary in how quickly and aggressively they have implemented CCSS. For the BCR analyses, two indexes were constructed to model CCSS implementation. They are based on surveys of state education agencies and named for the two years that the surveys were conducted. The 2011 survey reported the number of programs (e.g., professional development, new materials) on which states reported spending federal funds to implement CCSS. Strong implementers spent money on more activities. The 2011 index was used to investigate eighth grade math achievement in the 2014 BCR. A new implementation index was created for this year's study of reading achievement. The 2013 index is based on a survey asking states when they planned to complete full implementation of CCSS in classrooms.
Strong states aimed for full implementation by 2012-2013 or earlier. Fourth grade NAEP reading scores serve as the achievement measure. Why fourth grade and not eighth? Reading instruction is a key activity of elementary classrooms but by eighth grade has all but disappeared. What remains of “reading” as an independent subject, which has typically morphed into the study of literature, is subsumed under the English-Language Arts curriculum, a catchall term that also includes writing, vocabulary, listening, and public speaking. Most students in fourth grade are in self-contained classes; they receive instruction in all subjects from one teacher. The impact of CCSS on reading instruction—the recommendation that non-fiction take a larger role in reading materials is a good example—will be concentrated in the activities of a single teacher in elementary schools. The burden for meeting CCSS’s press for non-fiction, on the other hand, is expected to be shared by all middle and high school teachers.[vii]

Results

Table 2-1 displays NAEP gains using the 2011 implementation index. The four-year period between 2009 and 2013 is broken down into two parts: 2009-2011 and 2011-2013. Nineteen states are categorized as “strong” implementers of CCSS on the 2011 index, and from 2009-2013, they outscored the four states that did not adopt CCSS by a little more than one scale score point (0.87 vs. -0.24 for a 1.11 difference). The non-adopters are the logical control group for CCSS, but with only four states in that category—Alaska, Nebraska, Texas, and Virginia—it is sensitive to big changes in one or two states. Alaska and Texas both experienced a decline in fourth grade reading scores from 2009-2013. The 1.11 point advantage in reading gains for strong CCSS implementers is similar to the 1.27 point advantage reported last year for eighth grade math. Both are small. The reading difference in favor of CCSS is equal to approximately 0.03 standard deviations of the 2009 baseline reading score. Also note that the differences were greater in 2009-2011 than in 2011-2013 and that the “medium” implementers performed as well as or better than the strong implementers over the entire four-year period (gain of 0.99).

Table 2-2 displays calculations using the 2013 implementation index. Twelve states are rated as strong CCSS implementers, seven fewer than on the 2011 index.[viii] Data for the non-adopters are the same as in the previous table. In 2009-2013, the strong implementers gained 1.27 NAEP points compared to -0.24 among the non-adopters, a difference of 1.51 points. The thirty-four states rated as medium implementers gained 0.82. The strong implementers on this index are states that reported full implementation of CCSS-ELA by 2013. Their larger gain in 2011-2013 (1.08 points) distinguishes them from the strong implementers in the previous table. The overall advantage of 1.51 points over non-adopters represents about 0.04 standard deviations of the 2009 NAEP reading score, not a difference with real-world significance. Taken together, the 2011 and 2013 indexes estimate that NAEP reading gains from 2009-2013 were one to one and one-half scale score points larger in the strong CCSS implementation states compared to the states that did not adopt CCSS.
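For readers who want to retrace the effect-size conversions reported above, here is a minimal sketch. The gain figures are taken from the text's summary of Tables 2-1 and 2-2; the baseline standard deviation of roughly 37 points is an assumption inferred from the statement that a 1.11-point difference equals about 0.03 SD, not a value reported in the tables themselves.

```python
# Back-of-the-envelope check of the effect sizes discussed above.
# Gains are 2009-2013 NAEP grade 4 reading changes as reported in the text.
gains = {
    "strong_2011_index": 0.87,   # strong implementers, 2011 index
    "strong_2013_index": 1.27,   # strong implementers, 2013 index
    "non_adopters": -0.24,       # Alaska, Nebraska, Texas, Virginia
}
BASELINE_SD = 37.0  # assumed SD of 2009 NAEP grade 4 reading scores (inferred, not reported)

def effect_size(treated_gain: float, control_gain: float, sd: float) -> float:
    """Difference in gains expressed in standard deviation units of the baseline year."""
    return (treated_gain - control_gain) / sd

for label in ("strong_2011_index", "strong_2013_index"):
    diff = gains[label] - gains["non_adopters"]
    es = effect_size(gains[label], gains["non_adopters"], BASELINE_SD)
    print(f"{label}: {diff:.2f} scale score points, about {es:.3f} SD")
```

Run as written, the sketch reproduces the differences of roughly 1.11 and 1.51 points and effect sizes of about 0.03 and 0.04 SD cited in the text.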
Common Core and Reading Content

As noted above, the 2013 implementation index is based on when states scheduled full implementation of CCSS in classrooms. Other than reading achievement, does the index seem to reflect changes in any other classroom variable believed to be related to CCSS implementation? If the answer is “yes,” that would bolster confidence that the index is measuring changes related to CCSS implementation. Let’s examine the types of literature that students encounter during instruction.

Perhaps the most controversial recommendation in the CCSS-ELA standards is the call for teachers to shift the content of reading materials away from stories and other fictional forms of literature in favor of more non-fiction. NAEP asks fourth grade teachers the extent to which they teach fiction and non-fiction over the course of the school year (see Figure 2-1). Historically, fiction has dominated fourth grade reading instruction. It still does. The percentage of teachers reporting that they teach fiction to a “large extent” exceeded the percentage answering “large extent” for non-fiction by 23 points in 2009 and 25 points in 2011. In 2013, the difference narrowed to only 15 percentage points, primarily because of non-fiction’s increased use. Fiction still dominated in 2013, but not by as much as in 2009. The differences reported in Table 2-3 are national indicators of fiction’s declining prominence in fourth grade reading instruction.

What about the states? We know that they were involved to varying degrees with the implementation of Common Core from 2009-2013. Is there evidence that fiction’s prominence was more likely to weaken in states most aggressively pursuing CCSS implementation? Table 2-3 displays the data tackling that question. Fourth grade teachers in strong implementation states decisively favored the use of fiction over non-fiction in 2009 and 2011. But the prominence of fiction in those states experienced a large decline in 2013 (-12.4 percentage points). The decline for the entire four-year period, 2009-2013, was larger in the strong implementation states (-10.8) than in the medium implementation (-7.5) or non-adoption states (-9.8).

Conclusion

This section of the Brown Center Report analyzed NAEP data and two indexes of CCSS implementation, one based on data collected in 2011, the second from data collected in 2013. NAEP scores for 2009-2013 were examined. Fourth grade reading scores improved by 1.11 scale score points in states with strong implementation of CCSS compared to states that did not adopt CCSS. A similar comparison in last year’s BCR found a 1.27 point difference on NAEP’s eighth grade math test, also in favor of states with strong implementation of CCSS. These differences, although certainly encouraging to CCSS supporters, are quite small, amounting to (at most) 0.04 standard deviations (SD) on the NAEP scale. A threshold of 0.20 SD—five times larger—is often invoked as the minimum size for a test score change to be regarded as noticeable. The current study’s findings are also merely statistical associations and cannot be used to make causal claims. Perhaps other factors are driving test score changes, unmeasured by NAEP or the other sources of data analyzed here. The analysis also found that fourth grade teachers in strong implementation states are more likely to be shifting reading instruction from fiction to non-fiction texts. That trend should be monitored closely to see if it continues.

Other events to keep an eye on as the Common Core unfolds include the following: 1. The 2015 NAEP scores, typically released in the late fall, will be important for the Common Core.
In most states, the first CCSS-aligned state tests will be given in the spring of 2015. Based on the earlier experiences of Kentucky and New York, results are expected to be disappointing. Common Core supporters can respond by explaining that assessments given for the first time often produce disappointing results. They will also claim that the tests are more rigorous than previous state assessments. But it will be difficult to explain stagnant or falling NAEP scores in an era when implementing CCSS commands so much attention. 2. Assessment will become an important implementation variable in 2015 and subsequent years. For analysts, the strategy employed here, modeling different indicators based on information collected at different stages of implementation, should become even more useful. Some states are planning to use Smarter Balanced Assessments, others are using the Partnership for Assessment of Readiness for College and Careers (PARCC), and still others are using their own homegrown tests. To capture variation among the states on this important dimension of implementation, analysts will need to use indicators that are up-to-date. 3. The politics of Common Core injects a dynamic element into implementation. The status of implementation is constantly changing. States may choose to suspend, to delay, or to abandon CCSS. That will require analysts to regularly re-configure which states are considered “in” Common Core and which states are “out.” To further complicate matters, states may be “in” some years and “out” in others. A final word. When the 2014 BCR was released, many CCSS supporters commented that it is too early to tell the effects of Common Core. The point that states may need more time operating under CCSS to realize its full effects certainly has merit. But that does not discount everything states have done so far—including professional development, purchasing new textbooks and other instructional materials, designing new assessments, buying and installing computer systems, and conducting hearings and public outreach—as part of implementing the standards. Some states are in their fifth year of implementation. It could be that states need more time, but innovations can also produce their biggest “pop” earlier in implementation rather than later. Kentucky was one of the earliest states to adopt and implement CCSS. That state’s NAEP fourth grade reading score declined in both 2009-2011 and 2011-2013. The optimism of CCSS supporters is understandable, but a one and a half point NAEP gain might be as good as it gets for CCSS. [i] These ideas were first introduced in a 2013 Brown Center Chalkboard post I authored, entitled, “When Does a Policy Start?” [ii] Maria Glod, “Since NCLB, Math and Reading Scores Rise for Ages 9 and 13,” Washington Post, April 29, 2009. [iii] Mark Schneider, “NAEP Math Results Hold Bad News for NCLB,” AEIdeas (Washington, D.C.: American Enterprise Institute, 2009). [iv] Lisa Guisbond with Monty Neill and Bob Schaeffer, NCLB’s Lost Decade for Educational Progress: What Can We Learn from this Policy Failure? (Jamaica Plain, MA: FairTest, 2012). [v] Derek Neal and Diane Schanzenbach, “Left Behind by Design: Proficiency Counts and Test-Based Accountability,” NBER Working Paper No. W13293 (Cambridge: National Bureau of Economic Research, 2007), 13. [vi] Careful analysts of NCLB have allowed different states to have different starting dates: see Thomas Dee and Brian A. Jacob, “Evaluating NCLB,” Education Next 10, no. 3 (Summer 2010); Manyee Wong, Thomas D. 
Cook, and Peter M. Steiner, “No Child Left Behind: An Interim Evaluation of Its Effects on Learning Using Two Interrupted Time Series Each with Its Own Non-Equivalent Comparison Series,” Working Paper 09-11 (Evanston, IL: Northwestern University Institute for Policy Research, 2009). [vii] Common Core State Standards Initiative. “English Language Arts Standards, Key Design Consideration.” Retrieved from: http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/ [viii] Twelve states shifted downward from strong to medium and five states shifted upward from medium to strong, netting out to a seven state swing. Authors Tom Loveless Full Article
core Common Core and classroom instruction: The good, the bad, and the ugly By webfeeds.brookings.edu Published On :: Thu, 14 May 2015 00:00:00 -0400 This post continues a series begun in 2014 on implementing the Common Core State Standards (CCSS). The first installment introduced an analytical scheme investigating CCSS implementation along four dimensions: curriculum, instruction, assessment, and accountability. Three posts focused on curriculum. This post turns to instruction. Although the impact of CCSS on how teachers teach is discussed, the post is also concerned with the inverse relationship, how decisions that teachers make about instruction shape the implementation of CCSS. A couple of points before we get started. The previous posts on curriculum led readers from the upper levels of the educational system—federal and state policies—down to curricular decisions made “in the trenches”—in districts, schools, and classrooms. Standards emanate from the top of the system and are produced by politicians, policymakers, and experts. Curricular decisions are shared across education’s systemic levels. Instruction, on the other hand, is dominated by practitioners. The daily decisions that teachers make about how to teach under CCSS—and not the idealizations of instruction embraced by upper-level authorities—will ultimately determine what “CCSS instruction” really means. I ended the last post on CCSS by describing how curriculum and instruction can be so closely intertwined that the boundary between them is blurred. Sometimes stating a precise curricular objective dictates, or at least constrains, the range of instructional strategies that teachers may consider. That post focused on English-Language Arts. The current post focuses on mathematics in the elementary grades and describes examples of how CCSS will shape math instruction. As a former elementary school teacher, I offer my own personal opinion on these effects. The Good Certain aspects of the Common Core, when implemented, are likely to have a positive impact on the instruction of mathematics. For example, Common Core stresses that students recognize fractions as numbers on a number line. The emphasis begins in third grade: CCSS.MATH.CONTENT.3.NF.A.2 Understand a fraction as a number on the number line; represent fractions on a number line diagram. CCSS.MATH.CONTENT.3.NF.A.2.A Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line. CCSS.MATH.CONTENT.3.NF.A.2.B Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line. When I first read this section of the Common Core standards, I stood up and cheered. Berkeley mathematician Hung-Hsi Wu has been working with teachers for years to get them to understand the importance of using number lines in teaching fractions.[1] American textbooks rely heavily on part-whole representations to introduce fractions. Typically, students see pizzas and apples and other objects—typically other foods or money—that are divided up into equal parts. Such models are limited. They work okay with simple addition and subtraction. Common denominators present a bit of a challenge, but ½ pizza can be shown to be also 2/4, a half dollar equal to two quarters, and so on. 
With multiplication and division, all the little tricks students learned with whole number arithmetic suddenly go haywire. Students are accustomed to the fact that multiplying two whole numbers yields a product that is larger than either number being multiplied: 4 X 5 = 20 and 20 is larger than both 4 and 5.[2] How in the world can ¼ X 1/5 = 1/20, a number much smaller than either 1/4or 1/5? The part-whole representation has convinced many students that fractions are not numbers. Instead, they are seen as strange expressions comprising two numbers with a small horizontal bar separating them. I taught sixth grade but occasionally visited my colleagues’ classes in the lower grades. I recall one exchange with second or third graders that went something like this: “Give me a number between seven and nine.” Giggles. “Eight!” they shouted. “Give me a number between two and three.” Giggles. “There isn’t one!” they shouted. “Really?” I’d ask and draw a number line. After spending some time placing whole numbers on the number line, I’d observe, “There’s a lot of space between two and three. Is it just empty?” Silence. Puzzled little faces. Then a quiet voice. “Two and a half?” You have no idea how many children do not make the transition to understanding fractions as numbers and because of stumbling at this crucial stage, spend the rest of their careers as students of mathematics convinced that fractions are an impenetrable mystery. And that’s not true of just students. California adopted a test for teachers in the 1980s, the California Basic Educational Skills Test (CBEST). Beginning in 1982, even teachers already in the classroom had to pass it. I made a nice after-school and summer income tutoring colleagues who didn’t know fractions from Fermat’s Last Theorem. To be fair, primary teachers, teaching kindergarten or grades 1-2, would not teach fractions as part of their math curriculum and probably hadn’t worked with a fraction in decades. So they are no different than non-literary types who think Hamlet is just a play about a young guy who can’t make up his mind, has a weird relationship with his mother, and winds up dying at the end. Division is the most difficult operation to grasp for those arrested at the part-whole stage of understanding fractions. A problem that Liping Ma posed to teachers is now legendary.[3] She asked small groups of American and Chinese elementary teachers to divide 1 ¾ by ½ and to create a word problem that illustrates the calculation. All 72 Chinese teachers gave the correct answer and 65 developed an appropriate word problem. Only nine of the 23 American teachers solved the problem correctly. A single American teacher was able to devise an appropriate word problem. Granted, the American sample was not selected to be representative of American teachers as a whole, but the stark findings of the exercise did not shock anyone who has worked closely with elementary teachers in the U.S. They are often weak at math. Many of the teachers in Ma’s study had vague ideas of an “invert and multiply” rule but lacked a conceptual understanding of why it worked. A linguistic convention exacerbates the difficulty. Students may cling to the mistaken notion that “dividing in half” means “dividing by one-half.” It does not. Dividing in half means dividing by two. The number line can help clear up such confusion. Consider a basic, whole-number division problem for which third graders will already know the answer: 8 divided by 2 equals 4. 
It is evident that a segment 8 units in length (measured from 0 to 8) is divided by a segment 2 units in length (measured from 0 to 2) exactly 4 times. Modeling 12 divided by 2 and other basic facts with 2 as a divisor will convince students that whole number division works quite well on a number line. Now consider the number ½ as a divisor. It will become clear to students that 8 divided by ½ equals 16, and they can illustrate that fact on a number line by showing how a segment ½ unit in length divides a segment 8 units in length exactly 16 times; it divides a segment 12 units in length 24 times; and so on. Students will be relieved to discover that on a number line division with fractions works the same as division with whole numbers. Now, let’s return to Liping Ma’s problem: 1 ¾ divided by ½. This problem would not be presented in third grade, but it might be in fifth or sixth grades. Students who have been working with fractions on a number line for two or three years will have little trouble solving it. They will see that the problem simply asks them to divide a line segment of 1 ¾ units by a segment of ½ unit. The answer is 3 ½. Some students might estimate that the solution is between 3 and 4 because 1 ¾ lies between 1 ½ and 2, which on the number line are the points at which the ½ unit segment, laid end on end, falls exactly three and four times. Other students will have learned about reciprocals and that multiplication and division are inverse operations. They will immediately grasp that dividing by ½ is the same as multiplying by 2—and since 1 ¾ x 2 = 3 ½, that is the answer. Creating a word problem involving string or rope or some other linearly measured object is also surely within their grasp.
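As a quick check on the worked examples above, the arithmetic can be verified with exact rational arithmetic. The snippet below is only an illustration using Python's standard fractions module; it is not something prescribed by CCSS-M or drawn from the post itself.

```python
from fractions import Fraction

# Number-line division asks: how many times does the divisor segment
# fit into the dividend segment? Exact fractions make the answers explicit.
print(Fraction(8) / Fraction(1, 2))     # 16
print(Fraction(12) / Fraction(1, 2))    # 24
print(Fraction(7, 4) / Fraction(1, 2))  # 7/2, i.e., 3 1/2 (Liping Ma's problem: 1 3/4 divided by 1/2)

# "Invert and multiply" gives the same result: dividing by 1/2 is multiplying by 2.
assert Fraction(7, 4) / Fraction(1, 2) == Fraction(7, 4) * 2
```

The same module can be used to check the multiplication example from earlier in the post, 1/4 × 1/5 = 1/20, which is exactly where the part-whole picture of fractions tends to break down for students.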
Conclusion

I applaud the CCSS for introducing number lines and fractions in third grade. I believe it will instill in children an important idea: fractions are numbers. That foundational understanding will aid them as they work with more abstract representations of fractions in later grades. Fractions are a monumental barrier for kids who struggle with math, so the significance of this contribution should not be underestimated. I mentioned above that instruction and curriculum are often intertwined. I began this series of posts by defining curriculum as the “stuff” of learning—the content of what is taught in school, especially as embodied in the materials used in instruction. Instruction refers to the “how” of teaching—how teachers organize, present, and explain those materials. It’s each teacher’s repertoire of instructional strategies and techniques that differentiates one teacher from another even as they teach the same content. Choosing to use a number line to teach fractions is obviously an instructional decision, but it also involves curriculum. The number line is mathematical content, not just a teaching tool. Guiding third grade teachers towards using a number line does not guarantee effective instruction. In fact, it is reasonable to expect variation in how teachers will implement the CCSS standards listed above. A small body of research exists to guide practice. One of the best resources for teachers to consult is a practice guide published by the What Works Clearinghouse: Developing Effective Fractions Instruction for Kindergarten Through Eighth Grade (see full disclosure below).[4] The guide’s second recommendation is the use of number lines, but it also states that the evidence supporting the effectiveness of number lines in teaching fractions is inferred from studies involving whole numbers and decimals. We need much more research on how and when number lines should be used in teaching fractions. Professor Wu states the following: “The shift of emphasis from models of a fraction in the initial stage to an almost exclusive model of a fraction as a point on the number line can be done gradually and gracefully beginning somewhere in grade four. This shift is implicit in the Common Core Standards.”[5] I agree, but the shift is also subtle. CCSS standards include the use of other representations—fraction strips, fraction bars, rectangles (which are excellent for showing multiplication of two fractions), and other graphical means of modeling fractions. Some teachers will manage the shift to number lines adroitly—and others will not. As a consequence, the quality of implementation will vary from classroom to classroom based on the instructional decisions that teachers make. The current post has focused on what I believe to be a positive aspect of CCSS based on the implementation of the standards through instruction. Future posts in the series—covering the “bad” and the “ugly”—will describe aspects of instruction on which I am less optimistic. [1] See H. Wu (2014). “Teaching Fractions According to the Common Core Standards,” https://math.berkeley.edu/~wu/CCSS-Fractions_1.pdf. Also see "What's Sophisticated about Elementary Mathematics?" http://www.aft.org/sites/default/files/periodicals/wu_0.pdf [2] Students learn that 0 and 1 are exceptions and have their own special rules in multiplication. [3] Liping Ma, Knowing and Teaching Elementary Mathematics. [4] The practice guide can be found at: http://ies.ed.gov/ncee/wwc/pdf/practice_guides/fractions_pg_093010.pdf I serve as a content expert in elementary mathematics for the What Works Clearinghouse. I had nothing to do, however, with the publication cited. [5] Wu, page 3. Authors Tom Loveless Full Article
core Implementing Common Core: The problem of instructional time By webfeeds.brookings.edu Published On :: Thu, 09 Jul 2015 00:00:00 -0400 This is part two of my analysis of instruction and Common Core’s implementation. I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.” Having discussed “the good” in part one, I now turn to “the bad.” One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time. A Model of Time and Learning In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record. Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn. The numerator, time spent learning, has also been given the term opportunity to learn. The denominator, time needed to learn, is synonymous with student aptitude. By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] The source of that variation is largely irrelevant to the constraints placed on instructional decisions. Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator. Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like. There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time. The model has had a profound influence on educational thought. As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article. Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll. It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum. This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning. Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today. Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.
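Carroll stated the model in prose; the formula below is my own shorthand for the ratio described above, not notation taken from his essay.

```latex
\[
\text{degree of learning} \;=\;
  f\!\left(\frac{\text{time spent on learning (opportunity to learn)}}
                {\text{time needed to learn (aptitude)}}\right)
\]
```

Written this way, the two failure modes discussed in this post are easy to see: a ratio well below one means students were not given the opportunity to learn, while a ratio well above one means instructional time was spent on content students had already mastered.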
Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator. When that happens, students don’t have an adequate opportunity to learn. They need more time. As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time. Time is a limited resource that teachers deploy in the production of learning. Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary. Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics. Excessive instructional time may also negatively affect student engagement. Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need. Standard Algorithms and Alternative Strategies Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M. A standard algorithm is a series of steps designed to compute accurately and quickly. In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers. Most readers of this post will recognize the standard algorithm for addition. It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed. The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards. Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade. This opens the door for a lot of wasted time. Garelick questioned the wisdom of teaching several alternative strategies for addition. He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, could it be taught first. As he explains: Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 
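Before turning to Zimba's response, it may help to have the procedure in question spelled out. The sketch below implements the column addition described earlier in this post: line up the addends by place value and add the columns from right to left, carrying (regrouping) as needed. The function name and digit-list representation are my own illustration and appear nowhere in CCSS-M or in either author's writing.

```python
# A minimal sketch of the standard addition algorithm: add column by column
# from right to left, carrying whenever a column sum reaches 10.
def standard_addition(a: int, b: int) -> int:
    digits_a = [int(d) for d in str(a)][::-1]   # least significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        column = carry
        column += digits_a[i] if i < len(digits_a) else 0
        column += digits_b[i] if i < len(digits_b) else 0
        result.append(column % 10)   # the digit written beneath the column
        carry = column // 10         # the digit carried to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

assert standard_addition(457, 86) == 543
assert standard_addition(19, 6) == 25
```

The point of contention is not whether this procedure works, but when students should be required to master it and what, if anything, should be taught alongside it.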
Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations. He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers: In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions. Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades. Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade. It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards: 3.NBT.2 Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction. The term “strategies and algorithms” is curious. Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that despite these being standards, the CCSS-M will allow them great latitude. Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle. Why All the Fuss about Standard Algorithms? It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. Standard algorithms were a key point of contention in the “Math Wars” of the 1990s. The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii] The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade. These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists. In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 
Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade. That’s what reformers get in the compromise. They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic. But the standard algorithm is the only one students are required to learn. And that exclusivity is intended to please the traditionalists. I agree with Garelick that the compromise leads to problems. In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home. The students were being taught how to represent addition with drawings that clustered objects into groups of ten. The exercises were both time consuming and tedious. When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning. The parents withdrew their child from the school and enrolled him in private school. The value of standard algorithms is that they are efficient and packed with mathematics. Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers. Traditionalists and reformers have different goals. Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems. Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm. I have been a critic of the math reform movement since I taught in the 1980s. But some of their complaints have merit. All too often, instruction on standard algorithms has left out meaning. As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized. Michael Battista’s research has provided many instances of students clinging to algorithms without understanding. He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet. On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15. In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25. Despite the obvious discrepancy—(25 is not 15, the student agrees)—he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii] Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that? Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions. Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding. Conclusion Let’s return to Carroll’s model of time and learning. 
I conclude by making two points—one about curriculum and instruction, the other about implementation. In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences. Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade. Thus, the study of whole number arithmetic is completed by the end of fourth grade. Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics. Proficiency is sought along three dimensions: 1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems. Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years. Placing the standard for the division algorithm in sixth grade continues the two-year delay. For many fourth graders, time spent working on addition and subtraction will be wasted time. They already have a firm understanding of addition and subtraction. The same holds for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers. The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction. As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures. Such decisions are made by local educators. Variation in these decisions will introduce variation in the implementation of the math standards. It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction. But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core. [i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity. [ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error. [iii] Michael T. Battista (2001). “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press). Authors Tom Loveless Full Article
core No, the sky is not falling: Interpreting the latest SAT scores By webfeeds.brookings.edu Published On :: Thu, 01 Oct 2015 12:00:00 -0400 Earlier this month, the College Board released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Michael Petrilli of the Fordham Foundation argued that falling SAT scores show that high schools need more reform, presumably those his organization supports, in particular, charter schools and accountability.* For Carol Burris of the Network for Public Education, the declining scores were evidence of the failure of policies her organization opposes, namely, Common Core, No Child Left Behind, and accountability. Petrilli and Burris are both misusing SAT scores. The SAT is not designed to measure national achievement; the score losses from 2014 were minuscule; and most of the declines are probably the result of demographic changes in the SAT population. Let’s examine each of these points in greater detail. The SAT is not designed to measure national achievement It never was. The SAT was originally meant to measure a student’s aptitude for college independent of that student’s exposure to a particular curriculum. The test’s founders believed that gauging aptitude, rather than achievement, would serve the cause of fairness. A bright student from a high school in rural Nebraska or the mountains of West Virginia, they held, should have the same shot at attending elite universities as a student from an Eastern prep school, despite not having been exposed to the great literature and higher mathematics taught at prep schools. The SAT would measure reasoning and analytical skills, not the mastery of any particular body of knowledge. Its scores would level the playing field in terms of curricular exposure while providing a reasonable estimate of an individual’s probability of success in college. Note that even in this capacity, the scores never suffice alone; they are only used to make admissions decisions by colleges and universities, including such luminaries as Harvard and Stanford, in combination with a lot of other information—grade point averages, curricular resumes, essays, reference letters, extra-curricular activities—all of which constitute a student’s complete application. Today’s SAT has moved towards being a content-oriented test, but not entirely. Next year, the College Board will introduce a revised SAT to more closely reflect high school curricula. Even then, SAT scores should not be used to make judgments about U.S. high school performance, whether it’s a single high school, a state’s high schools, or all of the high schools in the country. The SAT sample is self-selected. In 2015, it only included about one-half of the nation’s high school graduates: 1.7 million out of approximately 3.3 million total. And that’s about one-ninth of approximately 16 million high school students. Generalizing SAT scores to these larger populations violates a basic rule of social science.
The College Board issues a warning when it releases SAT scores: “Since the population of test takers is self-selected, using aggregate SAT scores to compare or evaluate teachers, schools, districts, states, or other educational units is not valid, and the College Board strongly discourages such uses.” TIME’s coverage of the SAT release included a statement by Andrew Ho of Harvard University, who succinctly makes the point: “I think SAT and ACT are tests with important purposes, but measuring overall national educational progress is not one of them.” The score changes from 2014 were minuscule SAT scores changed very little from 2014 to 2015. Reading scores dropped from 497 to 495. Math scores also fell two points, from 513 to 511. Both declines are equal to about 0.017 standard deviations (SD).[i] To illustrate how small these changes truly are, let’s examine a metric I have used previously in discussing test scores. The average American male is 5’10” in height with a SD of about 3 inches. A 0.017 SD change in height is equal to about 1/20 of an inch (0.051). Do you really think you’d notice a difference in the height of two men standing next to each other if they only differed by 1/20th of an inch? You wouldn’t. Similarly, the change in SAT scores from 2014 to 2015 is trivial.[ii] A more serious concern is the SAT trend over the past decade. Since 2005, reading scores are down 13 points, from 508 to 495, and math scores are down nine points, from 520 to 511. These are equivalent to declines of 0.12 SD for reading and 0.08 SD for math.[iii] Representing changes that have accumulated over a decade, these losses are still quite small. In the Washington Post, Michael Petrilli asked “why is education reform hitting a brick wall in high school?” He also stated that “you see this in all kinds of evidence.” You do not see a decline in the best evidence, the National Assessment of Educational Progress (NAEP). Contrary to the SAT, NAEP is designed to monitor national achievement. Its test scores are based on a random sampling design, meaning that the scores can be construed as representative of U.S. students. NAEP administers two different tests to high school age students, the long term trend (LTT NAEP), given to 17-year-olds, and the main NAEP, given to twelfth graders. Table 1 compares the past ten years’ change in test scores of the SAT with changes in NAEP.[iv] The long term trend NAEP was not administered in 2005 or 2015, so the closest years it was given are shown. The NAEP tests show high school students making small gains over the past decade. They do not confirm the losses on the SAT.

Table 1. Comparison of changes in SAT, Main NAEP (12th grade), and LTT NAEP (17-year-olds) scores. Changes expressed as SD units of base year.

          SAT 2005-2015    Main NAEP 2005-2015    LTT NAEP 2004-2012
Reading   -0.12*           +.05*                  +.09*
Math      -0.08*           +.09*                  +.03
*p<.05

Petrilli raised another concern related to NAEP scores by examining cohort trends in NAEP scores. The trend for the 17-year-old cohort of 2012, for example, can be constructed by using the scores of 13-year-olds in 2008 and 9-year-olds in 2004. By tracking NAEP changes over time in this manner, one can get a rough idea of a particular cohort’s achievement as students grow older and proceed through the school system. Examining three cohorts, Fordham’s analysis shows that the gains between ages 13 and 17 are about half as large as those registered between ages nine and 13. Kids gain more on NAEP when they are younger than when they are older.
There is nothing new here. NAEP scholars have been aware of this phenomenon for a long time. Fordham points to particular elements of education reform that it favors—charter schools, vouchers, and accountability—as the probable cause. It is true that those reforms more likely target elementary and middle schools than high schools. But the research literature on age discrepancies in NAEP gains (which is not cited in the Fordham analysis) renders doubtful the thesis that education policies are responsible for the phenomenon.[v] Whether high school age students try as hard as they could on NAEP has been pointed to as one explanation. A 1996 analysis of NAEP answer sheets found that 25-to-30 percent of twelfth graders displayed off-task test behaviors—doodling, leaving items blank—compared to 13 percent of eighth graders and six percent of fourth graders. A 2004 national commission on the twelfth grade NAEP recommended incentives (scholarships, certificates, letters of recognition from the President) to boost high school students’ motivation to do well on NAEP. Why would high school seniors or juniors take NAEP seriously when this low stakes test is taken in the midst of taking SAT or ACT tests for college admission, end of course exams that affect high school GPA, AP tests that can affect placement in college courses, state accountability tests that can lead to their schools being deemed a success or failure, and high school exit exams that must be passed to graduate?[vi] Other possible explanations for the phenomenon are: 1) differences in the scales between the ages tested on LTT NAEP (in other words, a one-point gain on the scale between ages nine and 13 may not represent the same amount of learning as a one-point gain between ages 13 and 17); 2) different rates of participation in NAEP among elementary, middle, and high schools;[vii] and 3) social trends that affect all high school students, not just those in public schools. The third possibility can be explored by analyzing trends for students attending private schools. If Fordham had disaggregated the NAEP data by public and private schools (the scores of Catholic school students are available), it would have found that the pattern among private school students is similar—younger students gain more than older students on NAEP. That similarity casts doubt on the notion that policies governing public schools are responsible for the smaller gains among older students.[viii] Changes in the SAT population Writing in the Washington Post, Carol Burris addresses the question of whether demographic changes have influenced the decline in SAT scores. She concludes that they have not, and in particular, she concludes that the growing proportion of students receiving exam fee waivers has probably not affected scores. She bases that conclusion on an analysis of SAT participation disaggregated by level of family income. Burris notes that the percentage of SAT takers has been stable across income groups in recent years. That criterion is not trustworthy. About 39 percent of students in 2015 declined to provide information on family income. The 61 percent that answered the family income question are probably skewed against low-income students who are on fee waivers (the assumption being that they may feel uncomfortable answering a question about family income).[ix] Don’t forget that the SAT population as a whole is a self-selected sample. 
A self-selected subsample from a self-selected sample tells us even less than the original sample, which told us almost nothing. The fee waiver share of SAT takers increased from 21 percent in 2011 to 25 percent in 2015. The simple fact that fee waivers serve low-income families, whose children tend to be lower-scoring SAT takers, is important, but not the whole story here. Students from disadvantaged families have always taken the SAT. But they paid for it themselves. If an additional increment of disadvantaged families take the SAT because they don’t have to pay for it, it is important to consider whether the new entrants to the pool of SAT test takers possess unmeasured characteristics that correlate with achievement—beyond the effect already attributed to socioeconomic status. Robert Kelchen, an assistant professor of higher education at Seton Hall University, calculated the effect on national SAT scores of just three jurisdictions (Washington, DC, Delaware, and Idaho) adopting policies of mandatory SAT testing paid for by the state. He estimated that these policies explain about 21 percent of the nationwide decline in test scores between 2011 and 2015. He also notes that a more thorough analysis, incorporating fee waivers of other states and districts, would surely boost that figure. Fee waivers in two dozen Texas school districts, for example, are granted to all juniors and seniors in high school. And all students in those districts (including Dallas and Fort Worth) are required to take the SAT beginning in the junior year. Such universal testing policies can increase access and serve the cause of equity, but they will also, at least for a while, lead to a decline in SAT scores. Here, I offer my own back-of-the-envelope calculation of the relationship of demographic changes with SAT scores. The College Board reports test scores and participation rates for nine racial and ethnic groups.[x] These data are preferable to family income because a) almost all students answer the race/ethnicity question (only four percent are non-responses versus 39 percent for family income), and b) it seems a safe assumption that students are more likely to know their race or ethnicity compared to their family’s income. The question tackled in Table 2 is this: how much would the national SAT scores have changed from 2005 to 2015 if the scores of each racial/ethnic group stayed exactly the same as in 2005, but each group’s proportion of the total population were allowed to vary? In other words, the scores are fixed at the 2005 level for each group—no change. The SAT national scores are then recalculated using the 2015 proportions that each group represented in the national population.

Table 2. SAT Scores and Demographic Changes in the SAT Population (2005-2015)

          Projected Change Based on    Actual Change    Projected Change as
          Change in Proportions                         Percentage of Actual Change
Reading   -9                           -13              69%
Math      -7                           -9               78%

The data suggest that two-thirds to three-quarters of the SAT score decline from 2005 to 2015 is associated with demographic changes in the test-taking population. The analysis is admittedly crude. The relationships are correlational, not causal. The race/ethnicity categories are surely serving as proxies for a bundle of other characteristics affecting SAT scores, some unobserved and others (e.g., family income, parental education, language status, class rank) that are included in the SAT questionnaire but produce data difficult to interpret.
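To make the mechanics of the Table 2 exercise concrete, here is a minimal sketch of the reweighting described above. The group labels, mean scores, and population shares below are hypothetical placeholders, not the College Board figures behind Table 2; only the method, holding each group's 2005 mean fixed and swapping in the 2015 population shares, follows the text.

```python
# Hypothetical inputs for illustration only (NOT the College Board data).
scores_2005 = {"group_a": 529, "group_b": 433, "group_c": 460}    # 2005 group means
shares_2005 = {"group_a": 0.62, "group_b": 0.13, "group_c": 0.25}  # 2005 population shares
shares_2015 = {"group_a": 0.52, "group_b": 0.14, "group_c": 0.34}  # 2015 population shares

def weighted_mean(scores: dict, shares: dict) -> float:
    """National mean as the share-weighted average of group means."""
    return sum(scores[g] * shares[g] for g in scores)

actual_2005 = weighted_mean(scores_2005, shares_2005)     # 2005 scores, 2005 composition
counterfactual = weighted_mean(scores_2005, shares_2015)  # 2005 scores, 2015 composition
print(f"Projected change due to composition alone: {counterfactual - actual_2005:+.1f} points")
```

Dividing that projected change by the actual 2005-2015 change gives the percentages reported in the last column of Table 2.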
Conclusion Using an annual decline in SAT scores to indict high schools is bogus. The SAT should not be used to measure national achievement. SAT changes from 2014-2015 are tiny. The downward trend over the past decade represents a larger decline in SAT scores, but one that is still small in magnitude and correlated with changes in the SAT test-taking population. In contrast to SAT scores, NAEP scores, which are designed to monitor national achievement, report slight gains for 17-year-olds over the past ten years. It is true that LTT NAEP gains are larger among students from ages nine to 13 than from ages 13 to 17, but research has uncovered several plausible explanations for why that occurs. The public should exercise great caution in accepting the findings of test score analyses. Test scores are often misinterpreted to promote political agendas, and much of the alarmist rhetoric provoked by small declines in scores is unjustified. * In fairness to Petrilli, he acknowledges in his post, “The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.” [i] The 2014 SD for both SAT reading and math was 115. [ii] A substantively trivial change may nevertheless reach statistical significance with large samples. [iii] The 2005 SDs were 113 for reading and 115 for math. [iv] Throughout this post, SAT’s Critical Reading (formerly, the SAT-Verbal section) is referred to as “reading.” I only examine SAT reading and math scores to allow for comparisons to NAEP. Moreover, SAT’s writing section will be dropped in 2016. [v] The larger gains by younger vs. older students on NAEP is explored in greater detail in the 2006 Brown Center Report, pp. 10-11. [vi] If these influences have remained stable over time, they would not affect trends in NAEP. It is hard to believe, however, that high stakes tests carry the same importance today to high school students as they did in the past. [vii] The 2004 blue ribbon commission report on the twelfth grade NAEP reported that by 2002 participation rates had fallen to 55 percent. That compares to 76 percent at eighth grade and 80 percent at fourth grade. Participation rates refer to the originally drawn sample, before replacements are made. NAEP is conducted with two stage sampling—schools first, then students within schools—meaning that the low participation rate is a product of both depressed school (82 percent) and student (77 percent) participation. See page 8 of: http://www.nagb.org/content/nagb/assets/documents/publications/12_gr_commission_rpt.pdf [viii] Private school data are spotty on the LTT NAEP because of problems meeting reporting standards, but analyses identical to Fordham’s can be conducted on Catholic school students for the 2008 and 2012 cohorts of 17-year-olds. [ix] The non-response rate in 2005 was 33 percent. [x] The nine response categories are: American Indian or Alaska Native; Asian, Asian American, or Pacific Islander; Black or African American; Mexican or Mexican American; Puerto Rican; Other Hispanic, Latino, or Latin American; White; Other; and No Response. Authors Tom Loveless Full Article
core Has Common Core influenced instruction? By webfeeds.brookings.edu Published On :: Tue, 24 Nov 2015 07:30:00 -0500

The release of 2015 NAEP scores showed national achievement stalling out or falling in reading and mathematics. The poor results triggered speculation about the effect of Common Core State Standards (CCSS), the controversial set of standards adopted by more than 40 states since 2010. Critics of Common Core tended to blame the standards for the disappointing scores. Its defenders said it was too early to assess CCSS's impact and that implementation would take many years to unfold. William J. Bushaw, executive director of the National Assessment Governing Board, cited "curricular uncertainty" as the culprit. Secretary of Education Arne Duncan argued that new standards typically experience an "implementation dip" in the early days of teachers actually trying to implement them in classrooms.

In the rush to argue whether CCSS has positively or negatively affected American education, these speculations are vague as to how the standards boosted or depressed learning. They don't provide a description of the mechanisms, the connective tissue, linking standards to learning. Bushaw and Duncan come the closest, arguing that the newness of CCSS has created curriculum confusion, but the explanation falls flat for a couple of reasons.

Curriculum in the three states that adopted the standards, rescinded them, then adopted something else should be extremely confused. But the 2013-2015 NAEP changes for Indiana, Oklahoma, and South Carolina were a little bit better than the national figures, not worse.[i] In addition, surveys of math teachers conducted in the first year or two after the standards were adopted found that: a) most teachers liked them, and b) most teachers said they were already teaching in a manner consistent with CCSS.[ii] They didn't mention uncertainty. Recent polls, however, show those positive sentiments eroding. Mr. Bushaw might be mistaking disenchantment for uncertainty.[iii]

For teachers, the novelty of CCSS should be dissipating. Common Core's advocates placed great faith in professional development to implement the standards. Well, there's been a lot of it. Over the past few years, millions of teacher-hours have been devoted to CCSS training. Whether all that activity had a lasting impact is questionable. Randomized control trials have been conducted of two large-scale professional development programs. Interestingly, although they pre-date CCSS, both programs attempted to promote the kind of "instructional shifts" championed by CCSS advocates. The studies found that if teacher behaviors change from such training—and that's not a certainty—the changes fade after a year or two. Indeed, that's a pattern evident in many studies of educational change: a pop at the beginning, followed by fade out.

My own work analyzing NAEP scores in 2011 and 2013 led me to conclude that the early implementation of CCSS was producing small, positive changes in NAEP.[iv] I warned that those gains "may be as good as it gets" for CCSS.[v] Advocates of the standards hope that CCSS will eventually produce long-term positive effects as educators learn how to use them. That's a reasonable hypothesis. But it should now be apparent that a counter-hypothesis has equal standing: any positive effect of adopting Common Core may have already occurred.
To be precise, the proposition is this: any effects from adopting new standards and attempting to change curriculum and instruction to conform to those standards occur early and are small in magnitude. Policymakers still have a couple of arrows left in the implementation quiver, accountability being the most powerful. Accountability systems have essentially been put on hold as NCLB sputtered to an end and new CCSS tests appeared on the scene. So the CCSS story isn't over. Both hypotheses remain plausible.

Reading Instruction in 4th and 8th Grades

Back to the mechanisms, the connective tissue binding standards to classrooms. The 2015 Brown Center Report introduced one possible classroom effect that is showing up in NAEP data: the relative emphasis teachers place on fiction and nonfiction in reading instruction. The ink was still drying on new Common Core textbooks when a heated debate broke out about CCSS's recommendation that informational reading should receive greater attention in classrooms.[vi] Fiction has long dominated reading instruction. That dominance appears to be waning. After 2011, something seems to have happened. I am more persuaded that Common Core influenced the recent shift towards nonfiction than I am that Common Core has significantly affected student achievement—for either good or ill. But causality is difficult to confirm or to reject with NAEP data, and trustworthy efforts to do so require a more sophisticated analysis than presented here.

Four lessons from previous education reforms

Nevertheless, the figures above reinforce important lessons that have been learned from previous top-down reforms. Let's conclude with four:

1. There seems to be evidence that CCSS is having an impact on the content of reading instruction, moving from the dominance of fiction over nonfiction to near parity in emphasis. Unfortunately, as Mark Bauerlein and Sandra Stotsky have pointed out, there is scant evidence that such a shift improves children's reading.[vii]

2. Reading more nonfiction does not necessarily mean that students will be reading higher quality texts, even if the materials are aligned with CCSS. The Core Knowledge Foundation and the Partnership for 21st Century Learning, both supporters of Common Core, have very different ideas on the texts schools should use with the CCSS.[viii] The two organizations advocate for curricula having almost nothing in common.

3. When it comes to the study of implementing education reforms, analysts tend to focus on the formal channels of implementation and the standard tools of public administration—for example, intergovernmental hand-offs (federal to state to district to school), alignment of curriculum, assessment, and other components of the reform, professional development, getting incentives right, and accountability mechanisms. Analysts often ignore informal channels, and some of those avenues funnel directly into schools and classrooms.[ix] Politics and the media are often overlooked. Principals and teachers are aware of the politics swirling around K-12 school reform. Many educators undoubtedly formed their own opinions on CCSS and the fiction vs. nonfiction debate before the standard managerial efforts touched them.

4. Local educators whose jobs are related to curriculum almost certainly have ideas about what constitutes good curriculum. It's part of the profession.
Major top-down reforms such as CCSS provide local proponents with political cover to pursue curricular and instructional changes that may be politically unpopular in the local jurisdiction. Anyone who believes nonfiction should have a more prominent role in the K-12 curriculum was handed a lever for promoting his or her beliefs by CCSS. I've previously called these the "dog whistles" of top-down curriculum reform, subtle signals that give local advocates license to promote unpopular positions on controversial issues.

[i] In the four subject-grade combinations assessed by NAEP (reading and math at 4th and 8th grades), IN, SC, and OK all exceeded national gains on at least three out of four tests from 2013-2015. NAEP data can be analyzed using the NAEP Data Explorer: http://nces.ed.gov/nationsreportcard/naepdata/.
[ii] In a Michigan State survey of teachers conducted in 2011, 77 percent of teachers, after being presented with selected CCSS standards for their grade, thought they were the same as their state's former standards. http://education.msu.edu/epc/publications/documents/WP33ImplementingtheCommonCoreStandardsforMathematicsWhatWeknowaboutTeacherofMathematicsin41S.pdf
[iii] In the Education Next surveys, 76 percent of teachers supported Common Core in 2013 and 12 percent opposed. In 2015, 40 percent supported and 50 percent opposed. http://educationnext.org/2015-ednext-poll-school-reform-opt-out-common-core-unions.
[iv] I used variation in state implementation of CCSS to assign the states to three groups and analyzed differences of the groups' NAEP gains. (A schematic sketch of this kind of grouping comparison appears after these notes.)
[v] http://www.brookings.edu/~/media/research/files/reports/2015/03/bcr/2015-brown-center-report_final.pdf
[vi] http://www.edweek.org/ew/articles/2012/11/14/12cc-nonfiction.h32.html?qs=common+core+fiction
[vii] Mark Bauerlein and Sandra Stotsky (2012). "How Common Core's ELA Standards Place College Readiness at Risk." A Pioneer Institute White Paper.
[viii] Compare the P21 Common Core Toolkit (http://www.p21.org/our-work/resources/for-educators/1005-p21-common-core-toolkit) with Core Knowledge ELA Sequence (http://www.coreknowledge.org/ccss). It is hard to believe that they are talking about the same standards in references to CCSS.
[ix] I elaborate on this point in Chapter 8, "The Fate of Reform," in The Tracking Wars: State Reform Meets School Policy (Brookings Institution Press, 1999).

Authors Tom Loveless Image Source: © Patrick Fallon / Reuters Full Article
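Note [iv] above sketches the method behind the NAEP comparison referenced in this post: assign states to groups according to how far along they are in implementing CCSS, then compare average NAEP gains across the groups. The code below is only a schematic illustration of that kind of descriptive comparison; the state labels, group assignments, and gain figures are invented placeholders, not the data behind the Brown Center analysis.

```python
# Schematic grouping comparison in the spirit of note [iv]: average NAEP gain
# by CCSS implementation group. All rows are invented placeholder data.
import pandas as pd

states = pd.DataFrame(
    [
        ("State A", "strong implementer", 1.2),
        ("State B", "strong implementer", 0.4),
        ("State C", "medium implementer", 0.9),
        ("State D", "medium implementer", -0.3),
        ("State E", "non-adopter", 0.6),
        ("State F", "non-adopter", 0.1),
    ],
    columns=["state", "ccss_group", "naep_gain"],
)

# Mean NAEP gain by implementation group (descriptive only).
print(states.groupby("ccss_group")["naep_gain"].mean().round(2))
```

As the author stresses elsewhere, differences in group means like these are associations only; they cannot be read as estimates of Common Core's causal impact.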
core Reading and math in the Common Core era By webfeeds.brookings.edu Published On :: Thu, 24 Mar 2016 00:00:00 -0400 Full Article
core Brookings Live: Reading and math in the Common Core era By webfeeds.brookings.edu Published On :: Mon, 28 Mar 2016 16:00:00 -0400 Event Information: March 28, 2016, 4:00 PM - 4:30 PM EDT, Online Only, Live Webcast. And more from the Brown Center Report on American Education. The Common Core State Standards have been adopted as the reading and math standards in more than forty states, but are the frontline implementers—teachers and principals—enacting them? As part of the 2016 Brown Center Report on American Education, Tom Loveless examines the degree to which CCSS recommendations have penetrated schools and classrooms. He specifically looks at the impact the standards have had on the emphasis of non-fiction vs. fiction texts in reading, and on enrollment in advanced courses in mathematics. On March 28, the Brown Center hosted an online discussion of Loveless's findings, moderated by the Urban Institute's Matthew Chingos. In addition to the Common Core, Loveless and Chingos also discussed the other sections of the three-part Brown Center Report, including a study of the relationship between ability group tracking in eighth grade and AP performance in high school. Watch the archived video below. Full Article
core Common Core’s major political challenges for the remainder of 2016 By webfeeds.brookings.edu Published On :: Wed, 30 Mar 2016 07:00:00 -0400

The 2016 Brown Center Report (BCR), which was published last week, presented a study of Common Core State Standards (CCSS). In this post, I'd like to elaborate on a topic touched upon but deserving further attention: what to expect in Common Core's immediate political future. I discuss four key challenges that CCSS will face between now and the end of the year.

Let's set the stage for the discussion. The BCR study produced two major findings. First, several changes that CCSS promotes in curriculum and instruction appear to be taking place at the school level. Second, states that adopted CCSS and have been implementing the standards have registered about the same gains and losses on NAEP as states that either adopted and rescinded CCSS or never adopted CCSS in the first place. These are merely associations and cannot be interpreted as saying anything about CCSS's causal impact. Politically, that doesn't really matter. The big story is that NAEP scores have been flat for six years, an unprecedented stagnation in national achievement that states have experienced regardless of their stance on CCSS. Yes, it's unfair, but CCSS is paying a political price for those disappointing NAEP scores. No clear NAEP differences have emerged between CCSS adopters and non-adopters to reverse that political dynamic.

TIMSS and PISA scores in November-December

NAEP has two separate test programs. The scores released in 2015 were for the main NAEP, which began in 1990. The long term trend (LTT) NAEP, a different test that was first given in 1969, has not been administered since 2012. It was scheduled to be given in 2016, but was cancelled due to budgetary constraints. It was next scheduled for 2020, but last fall officials cancelled that round of testing as well, meaning that the LTT NAEP won't be given again until 2024.

With the LTT NAEP on hold, only two international assessments will soon offer estimates of U.S. achievement that, like the two NAEP tests, are based on scientific sampling: PISA and TIMSS. Both tests were administered in 2015, and the new scores will be released around the Thanksgiving-Christmas period of 2016. If PISA and TIMSS confirm the stagnant trend in U.S. achievement, expect CCSS to take another political hit. America's performance on international tests engenders a lot of hand wringing anyway, so the reaction to disappointing PISA or TIMSS scores may be even more pronounced than what the disappointing NAEP scores generated.

Is teacher support still declining?

Watch Education Next's survey on Common Core (usually released in August/September) and pay close attention to teacher support for CCSS. The trend line has been heading steadily south. In 2013, 76 percent of teachers said they supported CCSS and only 12 percent were opposed. In 2014, teacher support fell to 43 percent and opposition grew to 37 percent. In 2015, opponents outnumbered supporters for the first time, 50 percent to 37 percent. Further erosion of teacher support will indicate that Common Core's implementation is in trouble at the ground level. Don't forget: teachers are the final implementers of standards.
An effort by Common Core supporters to change NAEP

The 2015 NAEP math scores were disappointing. Watch for an attempt by Common Core supporters to change the NAEP math tests. Michael Cohen, President of Achieve, a prominent pro-CCSS organization, released a statement about the 2015 NAEP scores that included the following: "The National Assessment Governing Board, which oversees NAEP, should carefully review its frameworks and assessments in order to ensure that NAEP is in step with the leadership of the states. It appears that there is a mismatch between NAEP and all states' math standards, no matter if they are common standards or not."

Reviewing and potentially revising the NAEP math framework is long overdue. The last adoption was in 2004. The argument for changing NAEP to place greater emphasis on number and operations, revisions that would bring NAEP into closer alignment with Common Core, also has merit. I have a longstanding position on the NAEP math framework. In 2001, I urged the National Assessment Governing Board (NAGB) to reject the draft 2004 framework because it was weak on numbers and operations—and especially weak on assessing student proficiency with whole numbers, fractions, decimals, and percentages. Common Core's math standards are right in line with my 2001 complaint.

Despite my sympathy for Common Core advocates' position, a change in NAEP should not be made because of Common Core. In that 2001 testimony, I urged NAGB to end the marriage of NAEP with the 1989 standards of the National Council of Teachers of Mathematics, the math reform document that had guided the main NAEP since its inception. Reform movements come and go, I argued. NAGB's job is to keep NAEP rigorously neutral. The assessment's integrity depends upon it. NAEP was originally intended to function as a measuring stick, not as a PR device for one reform or another. If NAEP is changed it must be done very carefully and should be rooted in the mathematics children must learn. The political consequences of it appearing that powerful groups in Washington, DC are changing "The Nation's Report Card" in order for Common Core to look better will hurt both Common Core and NAEP.

Will Opt Out grow?

Watch the Opt Out movement. In 2015, several organized groups of parents refused to allow their children to take Common Core tests. In New York state alone, about 60,000 opted out in 2014, skyrocketing to 200,000 in 2015. Common Core testing for 2016 begins now and goes through May. It will be important to see whether Opt Out can expand to other states, grow in numbers, and branch out beyond middle- and upper-income neighborhoods.

Conclusion

Common Core is now several years into implementation. Supporters have had a difficult time persuading skeptics that any positive results have occurred. The best evidence has been mixed on that question. CCSS advocates say it is too early to tell, and we'll just have to wait to see the benefits. That defense won't work much longer. Time is running out. The political challenges that Common Core faces the remainder of this year may determine whether it survives.

Authors Tom Loveless Image Source: Jim Young / Reuters Full Article
core Obama scores a triple in Havana By webfeeds.brookings.edu Published On :: Wed, 23 Mar 2016 11:45:00 -0400 Editors' Note: Brookings Nonresident Senior Fellow Richard Feinberg reports from Havana on President Obama's historic visit to the island.

Walking the streets of Havana during Obama's two full days here, I found that the face of every Cuban I spoke with lit up brightly upon the mere mention of Obama's name. "Brilliant," "well-spoken," "well-prepared," "humanitarian," "a true friend of Cuba," were common refrains. These Cubans did not need to add that their own aging, distant leaders compare unfavorably to the elegant, accessible Obama. And the U.S. president's mixed ethnicity is a powerful visual that does not need to be verbally underscored to a multi-racial Cuban population. But this skeptical question remained: "Would the visit make a lasting difference?" Would the government of Cuba permit some of the changes that Obama was so forcefully advocating?

In his joint press conference with President Raúl Castro, and in his speech in a concert hall that was televised live to an intensely interested Cuban public, Obama spoke with remarkable directness about human rights and democratic freedoms, sparking more than one overheard conversation among Cubans about their own lack thereof. With eloquent dexterity, Obama delivered his subversive message carefully wrapped in assurances about his respect for Cuba's national sovereignty. "Cubans will make their own destiny," he reassured a proudly nationalist audience. "The President of the world"—as average Cubans are wont to refer to the U.S. president—emphasized that just as the United States no longer perceives Cuba as a threat, neither should Cuba fear the United States. Offering an outstretched hand, Obama sought to deprive the Cuban authorities of the external threat that they have used so effectively to justify their authoritarian rule and to excuse their poor economic performance. On Cuban state television, commentators were clearly thrown on the defensive, seeking to return the conversation to the remaining economic sanctions—"the blockade"—to the U.S. occupation of the Guantanamo Naval Base and to past U.S. aggressions. Their national security paradigm requires such an imminent external danger.

Obama sought to strengthen the favorable trends on the island by meeting with independent civil society leaders and young private entrepreneurs. One owner of an event planning business confided to me, "I cried during our meeting with Obama—and I rarely cry—because here was the leader of the most powerful nation on earth meeting with us, and listening to us with sophisticated understanding, when our own leaders never ever do."

Obama assured the Cubans he would continue to ask Congress to lift the remaining economic sanctions—but he added that the Cuban government could help. It could allow U.S. firms to trade with the Cuban private sector and cooperatives, and now with some state-owned enterprises "if such exchanges would benefit the Cuban people." So far, the government has permitted very few such transactions—ironically, an auto-embargo. And the Cuban government could engage the United States in an effective human rights dialogue and prioritize settlement of outstanding claims. Certainly, the administration needs Cuba's help in broadening constituencies in the United States for its policy of positive engagement with Cuba. Some U.S. firms—Verizon; AT&T; AirBnB; now Starwood Hotels and Resorts; shortly, various U.S.
commercial airlines and ferry services—are signing deals. And the surging numbers of U.S. travelers visiting the island typically return home as advocates for deepening normalization. Obama's entourage included nearly 40 members of Congress, the largest of his presidency, he said. But Obama still does not have the votes to lift the embargo. He told the Cubans he has "aggressively" used executive authority to carve out exceptions to the embargo, such that the list of things he can do administratively is growing shorter. In effect, he tossed the ball into the Cuban government's court. Only if Cuba opens to U.S. commerce, only if it shows a disposition to improve its human rights practices, might the U.S. Congress be moved to fully normalize economic relations.

If Cubans were so impressed by Obama, why do I award him only a triple? Fundamentally, because his White House staff failed to secure a schedule that would have exposed him more directly to the welcoming Cuban people. There were rumors he was to throw out the first pitch at an exhibition game between the Tampa Bay Rays and the Cuban national team (won 4-1 by the U.S. squad), but that opportunity was denied. Nor was he permitted to make his main speech before an outdoor Cuban public. As he walked around Havana's colonial center, the authorities allowed only small crowds. Michelle and accompanying daughters, Malia and Sasha—potentially powerful symbols in a family-oriented country—kept subdued schedules. Overall, the Cubans managed to hem Obama in, and to hand-select most of the audiences from among their loyal followers, audiences that were predictably polite but restrained. Fortunately, the meeting with opposition activists went forward as planned.

Further, while Obama's remarks were well received, his texts were not as well woven together by coherent narratives as they might have been. And many Cubans would have liked to hear more about specific measures to build a more prosperous economy. When asked by a reporter whether Castro and Obama had "chemistry," a senior Cuban diplomat preferred to refer to "mutual respect." But the two leaders did seem to develop a real rapport. During the baseball game, they spent a full hour sitting next to each other, seemingly in relaxed conversation. And during a brief question-and-answer period at the end of their joint press conference, when a U.S. reporter peppered Castro with hostile questions, Obama jumped in to fill time while Castro—not at all accustomed to press conferences—struggled to compose his response.

Cubans will long remember this visit by the sort of charismatic leader that they once had, in a youthful Fidel Castro, and that they would long to find once again. In the meantime, the Obama administration will do what it can to reintroduce Cuba to U.S. goods and services, U.S. citizen-diplomats, musical concerts, sports stars—Shaquille O'Neal, among others—and other cultural, educational, and scientific exchanges. And it will also spread ideas about how to improve the sluggish Cuban economy and gradually integrate it into global commerce, and in the longer run, to help give average Cubans a greater voice in determining their own national destiny.

Authors Richard E. Feinberg Image Source: © Jonathan Ernst / Reuters Full Article
core Trans-Atlantic Scorecard – July 2019 By webfeeds.brookings.edu Published On :: Thu, 18 Jul 2019 13:30:26 +0000 Welcome to the fourth edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Trans-Atlantic Scorecard – October 2019 By webfeeds.brookings.edu Published On :: Wed, 23 Oct 2019 14:38:07 +0000 Welcome to the fifth edition of the Trans-Atlantic Scorecard, a quarterly evaluation of U.S.-European relations produced by Brookings’s Center on the United States and Europe (CUSE), as part of the Brookings – Robert Bosch Foundation Transatlantic Initiative. To produce the Scorecard, we poll Brookings scholars and other experts on the present state of U.S. relations… Full Article
core Share your idea for how big data can help the environment and score a trip to the Eye on Earth Summit in Abu Dhabi By www.treehugger.com Published On :: Tue, 21 Jul 2015 11:00:06 -0400 The Eye on Earth Summit aims to harness the power of data and new data gathering technologies to help the environment and support sustainable development. Full Article Uncategorized