Lessons of history, law, and public opinion for AI development

Artificial intelligence is not the first technology to concern consumers. Over time, many innovations have frightened users and led to calls for major regulation or restrictions. Inventions such as the telegraph, television, and robots have generated everything from skepticism to outright fear. As AI technology advances, how should we evaluate AI? What measures should be…


The China debate: Are US and Chinese long-term interests fundamentally incompatible?

The first two years of Donald Trump’s presidency have coincided with an intensification of competition between the United States and China. Across nearly every facet of the relationship—trade, investment, technological innovation, military dialogue, academic exchange, relations with Taiwan, the South China Sea—tensions have risen and cooperation has waned. To some observers, the more competitive nature…


Statement of Martin Neil Baily to the public hearing concerning the Department of Labor’s proposed conflict of interest rule


Introduction

I would like to thank the Department for giving me the opportunity to testify on this important issue. The document I submitted to you is more general than most of the comments you have received, discussing the issues facing retirement savers and policymakers rather than engaging in a point-by-point discussion of the detailed DOL proposal.[1]

Issues around Retirement Saving

1. Most workers in the bottom third of the income distribution will rely on Social Security to support them in retirement and will save little. Hence it is vital that we support Social Security in roughly its present form and make sure it remains funded, either by raising revenues or by scaling back benefits for higher-income retirees, or both.

2. Those in the middle and upper middle income levels must now rely on 401(k) and IRA funds to provide income support in retirement. Many and perhaps most households lack a good understanding of how much they need to save and how to allocate their savings. This is true even of many savers with high levels of education and capability.

3. The most important mistakes savers make are: not saving enough; withdrawing savings before retirement; taking Social Security benefits too early;[2] not managing tax liabilities effectively; and failing to manage risk adequately in investment choices. This last category includes those who are too risk averse and choose low-return investments, as well as those who overestimate their own ability to pick stocks and time market movements. These points are discussed in the paper I submitted to the DOL in July. They indicate that retirement savers can benefit substantially from good advice.

4. The market for investment advice is one with asymmetric information, and such markets are prone to inefficiency. It is very hard to get incentives correctly aligned. Professional standards are often used as a way of dealing with such markets, but they are only partially successful. Advisers may be compensated through fees paid by the investment funds they recommend, either a load fee or a wrap fee. This arrangement can create an incentive for advisers to recommend high-fee plans.

5. At the same time, advisers who encourage increased saving, help savers select products with good returns and adequate diversification, and follow a strategy of holding assets until retirement provide benefits to their clients.

Implications for the DOL’s proposed conflict of interest rule

1. Disclosure. There should be a standardized and simple disclosure form provided to all households receiving investment advice, detailing the fees they will pay based on the choices they make. Different investment choices offered to clients should be accompanied by a statement describing how the fees received by the adviser would differ across the alternative recommendations made to the client.

2. Implications for small-scale savers. The proposed rule will bring with it increased compliance costs. These costs, combined with a reluctance to assume more risk and a fear of litigation, may make some advisers less likely to offer retirement advice to households with modest savings. These households are the ones most in need of direction and education, but because their accounts will not turn a profit for advisers, they may be abandoned. According to the Employee Benefits Security Administration (EBSA), the proposed rule will save families with IRAs more than $40 billion over the next decade. However, this benefit must be weighed against the attendant costs of implementing the rule. It is possible that the rule will leave low- and middle-income households without professional guidance, further widening the retirement savings gap. The DOL should consider ways to minimize or manage these costs. Options include incentivizing advisers to continue guiding small-scale savers, perhaps through the tax code, and promoting increased financial literacy training for households with modest savings. Streamlining and simplifying the rules would also help.

3. Need for Research on Online Solutions. The Administration has argued that online advice may be the solution for these savers, and for some fraction of this group it may be a good alternative. Relying on online sites to solve the problem seems a stretch, however. That may become a viable option in the future, but at present many people, especially in the older generation, lack sufficient knowledge and experience to rely on web solutions. The web offers dangers as well as solutions, including the potential for sub-optimal or fraudulent advice. I urge the DOL to commission independent research to determine how well a typical saver does when looking for investment advice online. Do savers receive good advice? Do they act on that advice? Which classes of savers do well or badly with online advice? Can web advice be made safer? To what extent do persons receiving online advice avoid the mistakes described earlier?

4. Pitfalls of MyRA. The Administration has also suggested that small savers use MyRA as a guide to their decisions. This option is low cost and safe, but the returns are very low and will not provide much of a cushion in retirement unless households set aside a much larger share of their income than has been the case historically.
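To see why low returns matter so much over a working career, consider a minimal sketch of compound growth; the saving amount, horizon, and both rates of return are hypothetical assumptions, with the 2 percent rate standing in for a MyRA-style safe return.

```python
# Illustrative only: compare retirement balances under a low "safe" return
# (assumed 2 percent, a stand-in for a MyRA-style bond rate) and a higher
# diversified-portfolio return (assumed 5 percent). All figures hypothetical.

def final_balance(annual_saving, rate, years):
    """Future value of a level annual contribution at a constant return."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + annual_saving
    return balance

saving = 5_000   # dollars saved per year (assumption)
years = 40       # length of working career (assumption)

print(f"2% return: ${final_balance(saving, 0.02, years):,.0f}")  # ~$302,000
print(f"5% return: ${final_balance(saving, 0.05, years):,.0f}")  # ~$604,000
```

At these assumed rates, the safe option ends with roughly half the balance, which is the arithmetic behind the point that low-return savers must set aside a much larger share of income to reach the same cushion.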

5. Clarifications about education versus advice. The proposed rule distinguishes education from advice. An adviser can share general information on best practices in retirement planning, including making age-appropriate asset allocations and determining the ideal age at which to retire, without triggering fiduciary responsibility. This is certainly a useful distinction. However, some advisers could frame this general information in a way that encourages clients to make decisions that are not in their own best interest. The DOL ought to think carefully about the line between education and advice, and about how to discourage advisers from sharing information in a way that creates future conflicts of interest. One option may be to standardize the general information that can be provided without triggering fiduciary responsibility.

6. Implications for risk management. Under the proposed rule, advisers may be reluctant to assume additional risk and may worry about litigation. In addition to pushing small-scale savers out of the market, the rule may encourage excessive risk aversion in some advisers. Conventional wisdom suggests that young savers should hold relatively high-risk portfolios and de-risk as they age, ending the accumulation period with a relatively low-risk portfolio. The proposed rule could cause advisers to discourage clients from taking on risk even when the risk is appropriate and the investor has realistic expectations. Extreme risk aversion could decrease both market returns for investors and the “value-add” of professional advisers. The DOL should think carefully about how it can discourage conflicted advice without encouraging overzealous risk reduction.

The proposed rule is an important effort to increase consumer protection and retirement security. However, in its current form, it may open the door to some undesirable or problematic outcomes. With some thoughtful revisions, I believe the rule can provide a net benefit to the country.



[1] Baily’s work has been assisted by Sarah E. Holmes. He is a Senior Fellow at the Brookings Institution and a Director of The Phoenix Companies, but the views expressed are his alone.

[2] As you know, postponing Social Security benefits yields an 8 percent real rate of return, far higher than most people earn on their investments. For most of those who can manage to do so, postponing the receipt of benefits is the best decision.

Publication: Public Hearing - Department of Labor’s Proposed Conflict of Interest Rule

The Republican Senate just rebuked Trump using the War Powers Act — for the third time. That’s remarkable.


Russia is a terrible ally against terrorism


Crisis in Eastern Europe: Manageable – But Needs to Be Managed

The leaders of Europe will meet this weekend to respond to the rapid deterioration of the economic situation in Emerging Europe. The situation varies a great deal; some countries have been more prudent in their policies than others. But all are joined, more or less strongly, through the deeply integrated European banking system. Western banks…


Empowering young people to end Chicago’s gun violence problem

Former U.S. Secretary of Education Arne Duncan sits down with young men from Chicago CRED (Creating Real Economic Diversity) to discuss the steps they have taken to disrupt the cycle of gun violence in their community and transition into the legal economy. http://directory.libsyn.com/episode/index/id/6400344 Also in this episode, meet David M. Rubenstein Fellow Randall Akee in…


Ways to mitigate artificial intelligence problems

The world is experiencing extraordinary advances in artificial intelligence, with applications being deployed in finance, health care, education, e-commerce, criminal justice, and national defense, among other areas. As AI technology advances across industries and into everyday use around the world, important questions must be addressed regarding transparency, fairness, privacy, ethics, and human safety. What are…


Overcoming barriers: Sustainable development, productive cities, and structural transformation in Africa

Against a background of protracted decline in global commodity prices and renewed focus on the Africa rising narrative, Africa is proving resilient, underpinned by strong economic performance in non-commodity exporting countries. The rise of African cities contains the potential for new engines for the continent’s structural transformation, if harnessed properly. However, the susceptibility of Africa’s…


The Dispensable Nation: American Foreign Policy in Retreat

Vali Nasr delivers a sharp indictment of America’s flawed foreign policy and outlines a new relationship with the Muslim world and with new players in the changing Middle East.


Is The United States A ‘Dispensable Nation’?

In an interview with NPR’s Steve Inskeep, Vali Nasr looks at how the U.S. has reduced its footprint in the world, and how China is primed to fill the void, especially in the Middle East.


COVID-19 has taught us the internet is critical and needs public interest oversight

The COVID-19 pandemic has graphically illustrated the importance of digital networks and service platforms. Imagine the shelter-in-place reality we would have experienced at the beginning of the 21st century, only two decades ago: a slow internet and (because of that) nothing like Zoom or Netflix. Digital networks that deliver the internet to our homes, and…


The Competitive Problem of Voter Turnout

On November 7, millions of Americans will exercise their civic duty to vote. At stake will be control of the House and Senate, not to mention the success of individual candidates running for office. President Bush's "stay the course" agenda will either be enabled over the next two years by a Republican Congress or knocked off kilter by a Democratic one.

With so much at stake, it is not surprising that the Pew Research Center found that 51 percent of registered voters have given a lot of thought to this November’s election. This is higher than in any other recent midterm election, including 1994, when the comparable figure was 44 percent and Republicans took control of the House. If interest translates into votes, turnout should better 1994’s rate of 41 percent of eligible voters.

There is good reason to suspect that, despite the high interest, turnout will not exceed 1994’s. The problem is that a national poll is, well, a national poll, and does not measure attitudes of voters within states and districts.

People vote when there is a reason to do so. Republican and Democratic agendas are in stark contrast on important issues, but voters also need to believe that their vote will matter in deciding who will represent them. It is here that the American electoral system is broken for many voters.

Voters have little choice in most elections. In 1994, Congressional Quarterly rated 98 House elections as competitive. Today, it lists 51. To put it another way, we are already fairly confident of the winner in nearly 90 percent of House races. Although there is no similar tracking for state legislative offices, we know that the number of elections won with less than 60 percent of the vote has fallen since 1994.

The real damage to the national turnout rate is in the large states of California and New York, which together account for 17 percent of the country’s eligible voters. Neither state has a competitive Senate or governor’s election, and few competitive House or state legislative races. Compare that to 1994: when Californians participated in competitive Senate and governor’s races, the state’s turnout was 5 percentage points above the national rate. The same year, New York’s competitive governor’s race helped boost turnout a point above the national rate.

Lacking stimulation from two of the largest states, turnout boosts will have to come from elsewhere. Texas has an interesting four-way governor’s race that might draw infrequent voters to the polls. Ohio’s competitive Senate race and some House races might also draw voters. However, in other large states like Florida, Illinois, Michigan and Pennsylvania, turnout will suffer from largely uncompetitive statewide races.

The national turnout rate will likely be lower than in 1994, falling shy of 40 percent. This is not to say that turnout will be poor everywhere. Energized voters in Connecticut get to vote in an interesting Senate race, and three of five Connecticut House seats are up for grabs. The problem is that turnout will be localized in these few areas of competition.

The fault does not lie with the voters: people’s lives are busy, and a rational person will abstain when their vote does not matter to the election outcome. The political parties are also sensitive to competition and focus their limited resources where elections are competitive. Television advertising and other mobilizing efforts by campaigns will only be found in competitive races.

The old adage of "build it and they will come" is relevant. All but hardcore sports fans tune out a blowout. Building competitive elections -- and giving voters real choices -- will do much to increase voter turnout in American politics. There are a number of reforms on the table: redistricting to create competitive districts, campaign financing to give candidates equal resources, and even altering the electoral system to fundamentally change how a vote elects representatives. If voters want choice and a government more responsive to their needs, they should consider how these seemingly arcane election procedures have real consequences on motivating them to do the most fundamental democratic action: vote.

Publication: washingtonpost.com

Collapsible Candidates from Iowa to New Hampshire

After his first place finish in Iowa, which was supposed to propel him to a New Hampshire victory, “change” is probably a word Barack Obama does not like as much anymore. But, his support did not really change much between these two elections. He won 38 percent of Iowa’s delegates and 36 percent of New Hampshire’s vote. It was Hillary Clinton and John McCain who were the big change candidates.

What happens when a presidential candidate who does well in one primary or caucus state does not do so well in the next? The dynamic of the presidential election can change swiftly and stunningly, as it did in New Hampshire on Tuesday.

How Barack Obama wishes John Edwards showed up in New Hampshire.

Edwards was awarded 30 percent of Iowa’s delegates, barely denying Clinton a second place finish. He finished a distant third in New Hampshire, receiving only 17 percent of the vote. There are strong indications that a shift among his supporters helped propel Hillary Clinton to her New Hampshire victory.

According to the exit polls, Edwards did 8 percentage points worse among women in New Hampshire, while Clinton did 16 points better. Obama’s support was virtually identical, dropping a statistically insignificant 1 percentage point.

Obama’s support among young people remained strong, increasing slightly among 18-24 and 30-39 year olds. Clinton’s support remained strong and slightly increased among those 65 and older. Edwards won Iowa’s middle-aged voters, age 40-64, but it was Clinton who decisively won this coveted age demographic in New Hampshire. And where these voters were 38 percent of Iowa caucus attendees, they were 54 percent of New Hampshire voters. (To understand why their turnout increased, see my analysis of Iowa’s turnout.)

Moving forward, the generational war is still a strong dynamic in the Democratic race, as evident in the candidates’ speech styles following the election results. In Iowa, Clinton was flanked by the ghosts of the Clinton administration. In New Hampshire, she shared the stage with a sea of young voters. In Iowa, Obama spoke of change, a message that resonates with younger people who are not part of the establishment. In New Hampshire his slogan was a message that echoes the can-do spirit of the greatest generation, “Yes, we can!”

In the days between Iowa and New Hampshire, Edwards spoke about how he wanted the election to become a two-way race. One should be careful what one wishes for. Edwards and Clinton are vying for the same support base, one that, when united, can defeat Obama, at least in New Hampshire. In the short term, Obama most needs Edwards to do better so that this support can continue to be divided.

Among Republicans, John McCain recreated his magic of eight years ago and bounced back strong from a poor Iowa showing to win New Hampshire.

The Iowa and New Hampshire electorates are so different it is difficult to compare them. In Iowa, Evangelical Christians were 60 percent of the electorate, while in New Hampshire, they were only 23 percent. Mike Huckabee’s move from first in Iowa to third in New Hampshire can be clearly attributed to the shrinking of his base. His collapse paved the way for a new winner to emerge.

It is thus tempting to attribute McCain’s victory solely to the different electorates, but he still had to defeat Mitt Romney to win New Hampshire.

According to the exit polls, the battle between McCain and Romney is a referendum on the Bush administration. Surprisingly, McCain, who has tried to rebuild bridges with the Bush establishment since his defeat in the 2000 presidential election, is still seen as the outsider and agent of change by voters participating in the Republican nomination process.

In both Iowa and New Hampshire, McCain drew his support from those who said they are angry or dissatisfied with the Bush administration. Romney drew his support from those who said they are enthusiastic or satisfied. Not surprisingly, McCain is also drawing more support from self-described Independents and Romney from Republicans.

The candidates seem to understand this dynamic, too, as they gave their speeches following the election results. In a contrived bit of acting, Romney showed up on stage without a podium and shoved a prepared speech back into his pocket (if he had needed a podium, his advance team would have provided it). He appeared relaxed, delivering his speech in a personable style reminiscent of Huckabee, who is competing with Romney for those who support Bush. But he also seemed to be reaching out to Independents with a message of change. In stark contrast, McCain delivered a carefully written, almost sedate speech designed to reassure Republicans of his conservative credentials.

This three-way dynamic between Huckabee, McCain, and Romney should prove fascinating as the Republican nomination process moves forward. Where Evangelicals are strong, Huckabee should do well. Where they are not, the rules governing whether Independents can participate will dictate how McCain and Romney do. We have yet to see regional candidates like Fred Thompson have their day in the sun. And then there is Rudy Giuliani, lying in wait in the larger states, where his name recognition should give him a significant boost over the other candidates. All of this points to an extended campaign among Republicans.

Michael P. McDonald is an Associate Professor at George Mason University and a Non-Resident Senior Fellow at the Brookings Institution. He studies voter turnout and is a consultant to the national exit poll organization.


Principles for Transparency and Public Participation in Redistricting


Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good government groups was convened in order to articulate principles for transparent redistricting and to identify barriers to the public and communities who wish to create redistricting plans. This document summarizes the principles for transparency in redistricting that were identified during that meeting.

Benefits of a Transparent, Participative Redistricting Process

The drawing of electoral districts is among the most easily manipulated and least transparent systems in democratic governance. All too often, redistricting authorities maintain their monopoly by imposing high barriers to transparency and public participation. Increasing transparency and public participation can be a powerful counterbalance: providing the public with information of the kind typically available only to official decision makers can lead to different outcomes and better representation.

Increasing transparency can empower the public to shape the representation for their communities, promote public commentary and discussion about redistricting, inform legislators and redistricting authorities which district configurations their constituents and the public support, and educate the public about the electoral process.  

Fostering public participation can enable the public to identify their neighborhoods and communities, promote the creation of alternative maps, and facilitate an exploration of a wide range of representational possibilities. The existence of publicly-drawn maps can provide a measuring stick against which an official plan can be compared, and promote the creation of a “market” for plans that support political fairness and community representational goals.

Transparency Principles

All redistricting plans should include sufficient information for the public to verify, reproduce, and evaluate them. Transparency thus requires that:

  • Redistricting plans must be available in non-proprietary formats.
  • Redistricting plans must be available in a format allowing them to be easily read and analyzed with commonly-used geographic information software.
  • The criteria used as a basis for creating plans and individual districts must be clearly documented.

Creating and evaluating redistricting plans and community boundaries requires access to demographic, geographic, community, and electoral data. Transparency thus requires that:

  • All data necessary to create legal redistricting plans and define community boundaries must be publicly available, under a license allowing reuse of these data for non-commercial purposes.
  • All data must be accompanied by clear documentation stating the original source, the chain of ownership (provenance), and all modifications made to it (a hypothetical example follows this list).
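
As an illustration of the kind of documentation this principle calls for, here is a hypothetical provenance record for a redistricting dataset; the field names and values are invented for this sketch, not taken from any established standard.

```python
# Hypothetical provenance record for a published redistricting dataset.
# Field names and values are illustrative, not an established schema.
provenance = {
    "dataset": "census_blocks_2010_statewide",
    "original_source": "U.S. Census Bureau, 2010 P.L. 94-171 redistricting data",
    "chain_of_ownership": [
        "U.S. Census Bureau",
        "state legislative services agency",
    ],
    "modifications": [
        "merged split blocks to align with current precinct boundaries",
        "joined precinct-level results from the most recent general election",
    ],
    "license": "reuse permitted for non-commercial purposes",
}
```

A plan published with a record like this can be verified and reproduced by anyone with access to the named source data.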

Software systems used to generate or analyze redistricting plans can be complex, impossible to reproduce, or impossible to correctly understand without documentation. Transparency thus requires that:

  • Software used to automatically create or improve redistricting plans must be either open-source or provide documentation sufficient for the public to replicate the results using independent software.
  • Software used to generate reports that analyze redistricting plans must be accompanied by documentation of data, methods, and procedures sufficient for the reports to be verified by the public.

Services offered to the public to create or evaluate redistricting plans and community boundaries are often opaque and subject to misinterpretation unless adequately documented. Transparency thus requires that:

  • Software necessary to replicate the creation or analysis of redistricting plans and community boundaries produced by the service must be publicly available.
  • Services must enable the public to export all published redistricting plans and community boundaries in non-proprietary formats that are easily read and analyzed with commonly-used geographic information software.
  • Services must provide documentation of any organizations providing significant contributions to their operation.

Promoting Public Participation

New technologies provide opportunities to broaden public participation in the redistricting process. These technologies should aim to realize the potential benefits described and be consistent with the articulated transparency principles.

Redistricting is a legally and technically complex process. District creation and analysis software can encourage broad participation by: being widely accessible and easy to use; providing mapping and evaluation tools that help the public create legal redistricting plans, as well as maps identifying local communities; being accompanied by training materials that assist the public in creating and evaluating legal redistricting plans and defining community boundaries; offering publication capabilities that allow the public to examine maps without access to the software; and promoting social networking that allows the public to compare, exchange and comment on both official and community-produced maps.



Official Endorsement from Organizations – Americans for Redistricting Reform, Brennan Center for Justice at New York University, Campaign Legal Center, Center for Governmental Studies, Center for Voting and Democracy, Common Cause, Demos, and the League of Women Voters of the United States.

Attending board members – Nancy Bekavac, Director, Scientists and Engineers for America; Derek Cressman, Western Regional Director of State Operations, Common Cause; Anthony Fairfax, President, Census Channel; Representative Mike Fortner (R), Illinois General Assembly; Karin Mac Donald, Director, Statewide Database, Berkeley Law, University of California, Berkeley; Leah Rush, Executive Director, Midwest Democracy Network; Mary Wilson, President, League of Women Voters.

Editors – Micah Altman, Harvard University and the Brookings Institution; Thomas E. Mann, Brookings Institution; Michael P. McDonald, George Mason University and the Brookings Institution; Norman J. Ornstein, American Enterprise Institute.

This project is funded by a grant from the Sloan Foundation to the Brookings Institution and the American Enterprise Institute.

Publication: The Brookings Institution and The American Enterprise Institute

Midterm Elections 2010: Driving Forces, Likely Outcomes, Possible Consequences

Event Information

October 4, 2010
9:30 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

As the recent primary in Delaware attests, this year's midterm elections continue to offer unexpected twists and raise large questions. Will the Republicans take over the House and possibly the Senate? Or has the Republican wave ebbed? What role will President Obama play in rallying seemingly dispirited Democrats -- and what effect will reaction to the sluggish economy have in rallying Republicans? Is the Tea Party more an asset or a liability to the G.O.P.'s hopes? What effect will the inevitably narrowed partisan majorities have in the last two years of Obama's first term? And how will contests for governorships and state legislatures around the nation affect redistricting and the shape of politics to come?

On October 4, a panel of Brookings Governance Studies scholars, moderated by Senior Fellow E.J. Dionne, Jr., attempted to answer these questions. Senior Fellow Thomas Mann provided an overview. Senior Fellow Sarah Binder discussed congressional dynamics under shrunken majorities or divided government. Senior Fellow William Galston offered his views on the administration’s policy prospects during the 112th Congress. Nonresident Senior Fellow Michael McDonald addressed electoral reapportionment and redistricting around the country.


Toward Public Participation in Redistricting


Event Information

January 20, 2011
9:00 AM - 12:00 PM EST

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

The drawing of legislative district boundaries is among the most self-interested and least transparent systems in American democratic governance. All too often, formal redistricting authorities maintain their control by imposing high barriers to transparency and to public participation in the process. Reform advocates believe that opening that process to the public could lead to different outcomes and better representation.

On January 20, Brookings hosted a briefing to review how redistricting in the 50 states will unfold in the months ahead and to present a number of state-based initiatives designed to increase transparency and public participation in redistricting. Brookings Nonresident Senior Fellows Micah Altman and Michael McDonald unveiled open source mapping software that enables users to create their own plans based on current census and historical election data, submit them to redistricting authorities, and disseminate them widely. Such alternative public maps could offer viable input to the formal redistricting process.

After each presentation, participants took audience questions.

Learn more about Michael McDonald's Public Mapping Project.


The Structure of the TANF Block Grant

The 1996 welfare reform legislation replaced the Aid to Families with Dependent Children (AFDC) program with a new Temporary Assistance for Needy Families (TANF) block grant that is very different from its predecessor. In the old AFDC program, funds were used almost entirely to provide and administer cash assistance to low-income—usually single-parent—families. The federal government…


Policy Leadership and the Blame Trap: Seven Strategies for Avoiding Policy Stalemate

Editor’s Note: This paper is part of the Governance Studies Management and Leadership Initiative. Negative messages about political opponents increasingly dominate not just election campaigns in the United States, but the policymaking process as well.  And politics dominated by negative messaging (also known as blame-generating) tends to result in policy stalemate. Negative messaging is attractive…


The President's 2015 R&D Budget: Livin' with the blues


On March 4, President Obama submitted to Congress his 2015 budget request. In keeping with the spending-cap deal agreed to with Congress last December, the level of federal R&D funding will remain flat; when discounted by inflation, it is slightly lower. The requested R&D amount for 2015 is $135.4 billion, only $1.7 billion more than in 2014. Setting this 1.2% nominal increase against expected inflation of 1.7% leaves a decline of about 0.5% in real terms.
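The arithmetic is easy to verify; the short sketch below uses only the figures cited above and the same nominal-minus-inflation approximation as the text.

```python
# Check of the budget arithmetic cited above (figures from the text).
request_2015 = 135.4              # billions of dollars
increase_over_2014 = 1.7          # billions of dollars
request_2014 = request_2015 - increase_over_2014

nominal_growth = increase_over_2014 / request_2014  # roughly 1.2-1.3%
expected_inflation = 0.017                          # 1.7%

# First-order approximation used in the text: real ~ nominal - inflation
real_growth = nominal_growth - expected_inflation
print(f"nominal: {nominal_growth:.1%}, real: {real_growth:.1%}")
# nominal: 1.3%, real: -0.4% (about the 0.5% real decline cited)
```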

Reaction of the Research Community

The litany of complaints has started. The President’s Science and Technology Advisor, John Holdren, told the AAAS: “This budget required a lot of tough choices. All of us would have preferred more.” The Association of American Universities, representing 60 top research universities, put out a statement declaring that this budget does “disappointingly little to close the nation’s innovation deficit,” defined as the gap between the appropriate level of R&D investment and current spending.

What’s more, compared to 2014, the budget request keeps funding for scientific research roughly even but reallocates about $250 million from basic to applied research (see Table 1). Advocates of science have voiced their discontent. Take, for instance, the Federation of American Societies for Experimental Biology, which has called the request a “disappointment to the research community” because the President’s budget came $2.5 billion short of its recommendations.

[Table 1: The President’s Research and Development Budget 2015. Source: OMB Budget 2015. Table not reproduced here.]

These complaints are fully expected and even justified: each interest group must defend its share of tax revenues. Sadly, in times of austerity, these protestations are toothless. If they were to have any traction in claiming a bigger piece of the federal discretionary pie, advocates would have to make a comparative case showing which budget lines must go down to make room for more R&D. But that line of argument could mean suicide for the scientific community, because it would throw it into direct political contest with other interests, and such contests are rarely decided by the merits of the cause but by the relative political power of interest groups. The science lobby is better off issuing innocuous hortatory pronouncements than picking political fights it cannot win.

Thus, the R&D slice is to remain pegged to the size of the total budget, which in the coming years is not expected to grow more than a bonsai. The political accident of budget constraints is bound to change the scientific enterprise from within, not only in the articulation of merits, meaning more precise and compelling explanations of the relative importance of disciplines and programs, but also in a shrewd political contest among science factions.


Responsible innovation: A primer for policymakers


Technical change is advancing at breakneck speed while the institutions that govern innovative activity slog forward, trying to keep pace. The lag has created a need for reform in the governance of innovation. Reformers who focus primarily on the social benefits of innovation propose to unmoor the innovative forces of the market. Conversely, those who deal mostly with innovation’s social costs wish to constrain it by introducing regulations in advance of technological developments. In this paper, Walter Valdivia and David Guston argue for a different approach to reforming the governance of innovation, which they call “Responsible Innovation” because it seeks to imbue in the actors of the innovation system a more robust sense of individual and collective responsibility.

Responsible innovation appreciates the power of free markets in organizing innovation and realizing social expectations, but it is self-conscious about the social costs that markets do not internalize. At the same time, the actions it recommends do not seek to slow down innovation: rather than constraining the set of options for researchers and businesses, they expand it. Responsible innovation is not a doctrine of regulation, much less an instantiation of the precautionary principle. Innovation and society can evolve down several paths, and the path forward is to some extent open to collective choice. The aim of a responsible governance of innovation is to make that choice more consonant with democratic principles.

Valdivia and Guston illustrate how responsible innovation can be implemented with three practical initiatives: 

  1. Industry: Incorporating values and motivations into innovation decisions beyond the profit motive could help industry take a long view of those decisions and better manage its own costs associated with liability and regulation, while reducing the social cost of negative externalities. Consequently, responsible innovation should be an integral part of corporate social responsibility, considering that the latter has already become part of the language of business, from the classroom to the board room, and is effectively shaping, in some quarters, corporate policies and decisions.
  2. Universities and National Laboratories: Centers for Responsible Innovation, fashioned after the institutional reform of Institutional Review Boards to protect human subjects in research and the Offices of Technology Transfer created to commercialize academic research, could organize existing responsible innovation efforts at university and laboratory campuses. These Centers would formalize the consideration of the impacts of research proposals on legal and regulatory frameworks, economic opportunity and inequality, sustainable development and the environment, as well as ethical questions beyond the integrity of research subjects.
  3. Federal Government: First, federal policy should improve its protection and support of scientific research while providing mechanisms of public accountability for research funding agencies and their contractors. Demanding a return on investment for every research grant is a misguided approach that devalues research and undermines trust between Congress and the scientific community. At the same time, scientific institutions and their advocates should improve public engagement and demonstrate their willingness and ability to be responsive to societal concerns and expectations about the public research agenda. Second, if scientific research is a public good, then by definition markets are not effective at commercializing it. New mechanisms to develop practical applications from federal research with little market appeal should be introduced to counterbalance the emphasis the current technology transfer system places on research ready for the market. Third, federal innovation policy needs to be better coordinated with other federal policy, including tax, industrial, and trade policy as well as regulatory regimes. It should also improve coordination with initiatives at the local and state level to improve the outcomes of innovation for each region, state, and metro area.


NASA considers public values in its Asteroid Initiative


NASA’s Asteroid Initiative encompasses efforts for the human exploration of asteroids as well as the Asteroid Grand Challenge, which aims to enhance asteroid detection capabilities and mitigate the threat asteroids pose to Earth. The human space flight portion of the initiative primarily includes the Asteroid Redirect Mission (ARM), a proposal to put an asteroid in orbit of the moon and send astronauts to it. The program originally contemplated two alternatives for closer study: capturing a small asteroid about 10 meters in diameter, or simply recovering a boulder from a much larger asteroid. Late in March, NASA offered an update of its plans. It has decided to retrieve a boulder from an asteroid near Earth’s orbit—candidates are the asteroids 2008 EV5, Bennu, and Itokawa—and will place the boulder in lunar orbit for further study.

This mission will help NASA develop a host of technical capabilities. For instance, solar electric propulsion uses solar-generated electric power to ionize atoms and accelerate them for spacecraft propulsion; in the absence of gravity, even a modicum of force can alter the trajectory of a body in outer space. Another related capability under development is the gravity tractor, based on the notion that even the modest mass of a spacecraft can exert sufficient gravitational force over an asteroid to ever so slightly change its orbit. Capturing a boulder would further increase the ARM spacecraft’s mass, and the combined craft could test the technique on an asteroid that is steering clear of the Earth, showing how humans might head off asteroid threats in the future. NASA will thus have a second test of how to deflect near-Earth objects on a hazardous trajectory. The first test, implemented as part of the Deep Impact Mission, is a kinetic impactor; that is, crashing a spacecraft into an approaching object to change its trajectory.
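To give a sense of the scale at play, here is a rough back-of-envelope calculation of the gravity tractor effect; the spacecraft mass, hover distance, and duration are illustrative assumptions, not mission figures.

```python
# Back-of-envelope gravity tractor: the acceleration a hovering spacecraft's
# own gravity imparts on an asteroid. All inputs are assumed for illustration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_spacecraft = 5e4   # spacecraft plus captured boulder, kg (assumption)
r = 200.0            # hover distance from the asteroid's center, m (assumption)

accel = G * m_spacecraft / r**2   # independent of the asteroid's mass
seconds_per_year = 3.156e7
delta_v = accel * seconds_per_year

print(f"acceleration: {accel:.1e} m/s^2")            # ~8.3e-11 m/s^2
print(f"delta-v after one year: {delta_v:.1e} m/s")  # ~2.6e-03 m/s
```

Even a velocity change of a few millimeters per second, applied years before a predicted close approach, can shift an asteroid's arrival time enough for it to miss the Earth.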

The Asteroid Initiative is a partner of the agency’s Near Earth Object Observation (NEOO) program. The goal of this program is to discover and monitor space objects traveling on a trajectory that could pose the risk of hitting Earth with catastrophic effects. The program also seeks to develop mitigation strategies. The capabilities developed by ARM could also support other programs of NASA, such as the manned exploration of Mars.

NEOO has recently enjoyed an uptick of public support. It was funded at about $4 million in the 1990s, and in 2010 it was allocated a paltry $6 million. But then, a redirection of priorities—linked to the transition from the Bush to the Obama administrations—increased funding for NEOO to about $20 million in 2012 and $40 million in 2014, and NASA is seeking $50 million for 2015. It is clear that NASA officials made a compelling case for the importance of NEOO; in fact, what they are asking for seems quite modest if asteroids indeed pose an existential risk to life on Earth. At the same time, the instrumental importance of the program and the public funds devoted to it raise the question of whether taxpayers should have a say in the decisions NASA is making about how to proceed with the program.

NASA has done something remarkable to help answer this question.

Last November, NASA partnered with the ECAST network (Expert and Citizen Assessment of Science and Technology) to host a citizen forum assessing the Asteroid Initiative. ECAST is a consortium of science policy and advocacy organizations that specializes in citizen deliberations on science policy. The forum consisted of a dialogue with 100 citizens in Phoenix and Boston who learned more about the Asteroid Initiative and then commented on various aspects of the project.

The participants, who were selected to approximate the demographics of the U.S. population, were asked to assess mitigation strategies to protect against asteroids. They were introduced to four strategies: civil defense, gravity tractor, kinetic impactor, and nuclear blast deflection. As part of the deliberations, they were asked to consider the two aforementioned approaches to perform ARM. A consensus emerged about the boulder retrieval option primarily because citizens thought that option offered better prospects for developing planetary defense technologies.  This preference existed despite the excitement of capturing a full asteroid, which could potentially have additional economic impacts. The participants showed interest in promoting the development of mitigation capabilities at least as much as they wanted to protect traditional NASA goals such as the advancement of science and space flight technology. This is not surprising given that concerns about doomsday should reasonably take precedence over traditional research and exploration concerns.

NASA could have set ARM along the path of boulder retrieval exclusively on technical merits, but having conducted a citizen forum, the agency can now claim that the decision is also socially robust, which is to say, responsive to a considered public consensus. In this manner, NASA has shown a promising method by which mission-driven federal research agencies can increase their public accountability.

In the same spirit of responsible research and innovation, a recent Brookings paper I authored with David Guston—who is a co-founder of ECAST—proposes a number of other innovative ways in which the innovation enterprise can be made more responsive to public values and social expectations.

Kudos to NASA for being at the forefront of innovation in space exploration and public accountability.

bl

The fair compensation problem of geoengineering


The promise of geoengineering is placing average global temperature under human control, and it is thus considered a powerful instrument for the international community to deal with global warming. While great energy has been devoted to learning more about the natural systems that it would affect, questions of a political nature have received far less consideration. Taking as a given that regional effects will be asymmetric, the nations of the world will only give their consent to deploying this technology if they can be assured of a fair compensation mechanism, something like an insurance policy. The question of compensation reveals that the politics of geoengineering are far more difficult than the technical aspects.

What is Geoengineering?

In June 1991, Mount Pinatubo erupted, throwing a massive amount of volcanic sulfate aerosols into the upper atmosphere. The resulting cloud dispersed over weeks throughout the planet and cooled its average temperature by about 0.5° Celsius over the next two years. If this kind of natural phenomenon could be replicated and controlled, engineering the Earth’s climate would be within reach.

Spraying aerosols in the stratosphere is one method of solar radiation management (SRM), a class of climate engineering that focuses on increasing the albedo, i.e., the reflectivity, of the planet’s atmosphere. Other SRM methods include brightening clouds by increasing their content of sea salt. A second class of geoengineering efforts focuses on removing carbon from the atmosphere, and includes carbon sequestration (burying it deep underground) and increasing land or marine vegetation. Of all these methods, SRM is appealing for its effectiveness and low cost; a recent study put the cost at about $5 to $8 billion per year.[1]

Not only is SRM relatively inexpensive, but we already have the technological pieces that, assembled properly, would inject the skies with particles that reflect sunlight back into space. For instance, a fleet of modified Boeing 747s could deliver the necessary payload. Advocates of geoengineering are not too concerned about developing the technology to effect SRM, but about its likely consequences, not only for slowing global warming but also for regional weather. And there lies the difficult question for geoengineering: the effects of SRM are likely to be unequally distributed across nations.

Here is one example of these asymmetries: Julia Pongratz and colleagues at the Department of Global Ecology of the Carnegie Institution for Science estimated a net increase in yields of wheat, corn, and rice under SRM-modified weather. However, the study also found a redistributive effect, with equatorial countries experiencing lower yields.[2] We can therefore expect equatorial countries to demand fair compensation before signing on to the deployment of SRM, which leads to two problems: how to calculate compensation, and how to agree on a compensation mechanism.

The calculus of compensation

What should be the basis for fair compensation? One view of fairness could be that, every year, all economic gains derived from SRM are pooled together and distributed among the regions or countries that experience economic losses.

If the system pools gains from SRM and distributes them in proportion to losses, questions about the balance will only be asked in years in which gains and losses are about the same. If losses far exceed gains, this would be a form of insurance that cannot underwrite some of the incidents it is meant to cover. People will not buy such an insurance policy, which is to say, some countries will not authorize SRM deployment. In the reverse case, if the pool has a large balance left after paying out compensation, the winners from SRM will demand lower compensation taxes.
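As a purely hypothetical sketch of the mechanism just described, the following function pools all gains and compensates losers in proportion to their losses; the country labels and figures are invented for illustration.

```python
# Hypothetical SRM compensation pool: gains are pooled, losses compensated
# pro rata. Countries and figures are invented for illustration.

def settle(outcomes):
    """outcomes maps country -> net gain (+) or loss (-) attributed to SRM."""
    pool = sum(v for v in outcomes.values() if v > 0)
    losses = {c: -v for c, v in outcomes.items() if v < 0}
    total_loss = sum(losses.values())
    # Compensate losses in full when the pool allows; otherwise pro-rate.
    rate = min(1.0, pool / total_loss) if total_loss else 0.0
    payouts = {c: rate * loss for c, loss in losses.items()}
    surplus = pool - sum(payouts.values())  # leftover the winners will contest
    return payouts, surplus

# Pool exceeds losses: winners ask why the compensation tax is so high.
print(settle({"A": 12.0, "B": 3.0, "C": -6.0, "D": -4.0}))
# -> ({'C': 6.0, 'D': 4.0}, 5.0)

# Pool falls short: losers are only partly insured and may veto deployment.
print(settle({"A": 5.0, "C": -6.0, "D": -4.0}))
# -> ({'C': 3.0, 'D': 2.0}, 0.0)
```

Both failure modes described above appear immediately: either a surplus that winners will contest, or pro-rated payouts that leave losers underinsured. And this sketch assumes the one thing the text argues we cannot have: an uncontested attribution of each country's gain or loss to SRM.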

Further complicating the problem is the question of how to separate gains or losses attributable to SRM from ordinary regional weather fluctuations. Separating out the SRM effect could easily become an intractable problem, because regional weather patterns are themselves affected by SRM. For instance, in any year when El Niño is particularly strong, uncertainty about the net effect of SRM will balloon, because SRM could affect the severity of the oceanic oscillation itself. Science can reduce uncertainty, but only to a degree: the better we understand nature, the more we understand the contingency of natural systems. We can expect better explanations of natural phenomena from science, but it would be unfair to ask science to reduce that understanding to a hard figure that we can plug into a compensation equation.

Still, greater complexity arises when separating SRM effects from policy effects at the local and regional level. Some countries will surely organize better than others to manage this change, and preparation will be a factor in determining the magnitude of gains or losses. Inherent to the problem of estimating gains and losses from SRM is the inescapable subjective element of assessing preparation. 

The politics of compensation

Advocates of geoengineering tell us that their advocacy is not about deploying SRM; rather, it is about better understanding the scientific facts before we even consider deployment. It is tempting to believe that the accumulating science on SRM effects would be helpful. But when we consider the factors described above, it is quite possible that more science will only crystallize the uncertainty about exact amounts of compensation. The calculus of gain or loss, that is, the difference between reality and a counterfactual of what regions and countries would have experienced, requires certainty, but science yields only irreducible uncertainty about nature.

The epistemic problems with estimating compensation are only to be compounded by the political contestation of those numbers. Even within the scientific community, different climate models will yield different results, and since economic compensation is derived from those models’ output, we can expect a serious contestation of the objectivity of the science of SRM impact estimation. Who should formulate the equation? Who should feed the numbers into it? A sure way to alienate scientists from the peoples of the world is to ask them to assert their cognitive authority over this calculus. 

What’s more, other parts of the compensation equation, those related to regional efforts to deal with SRM effects, are inherently subjective. We should not forget the politics of asserting compensation commensurate with preparation effort; countries that experience low losses may also want compensation for their efforts in preparing for and coping with natural disasters.

Not only would a compensation equation be a sham, it would be unmanageable, and its legitimacy would always be in question. The calculus of compensation may seem a way to circumvent the impasses of politics and define fairness mathematically. Ironically, it is shot through with subjectivity; it is truly a political exercise.

Can we do without compensation?

Technological innovations are similar to legislative acts, observed Langdon Winner.[3] Technical choices made at the earliest stage of design quickly “become strongly fixed in material equipment, economic investment, and social habit, [and] the original flexibility vanishes for all practical purposes once the initial commitments are made.” For that reason, he insisted, “the same careful attention one would give to the rules, roles, and relationships of politics must also be given to such things as the building of highways, the creation of television networks, and the tailoring of seemingly insignificant features on new machines.”

If technological change can be thought of as legislative change, we must consider how a technology as momentous as SRM can be deployed in a manner consonant with our democratic values. Engineering the planet’s weather is nothing short of passing an amendment to Planet Earth’s Constitution. One pesky clause in that constitutional amendment is a fair compensation scheme. It seems so small a clause in comparison to the extent of the intervention, the governance of deployment and consequences, and the international commitments to be made as a condition for deployment (such as emissions mitigation and adaptation to climate change). But in the short consideration afforded here, we get a glimpse of the intractable political problem of setting up a compensation scheme. And yet, unless such a clause is approved by a majority of nations, the deployment of SRM has little hope of being consonant with democratic aspirations.


[1] McClellan, Justin, David W. Keith, and Jay Apt. 2012. Cost analysis of stratospheric albedo modification delivery systems. Environmental Research Letters 7(3): 1-8.

[2] Pongratz, Julia, D. B. Lobell, L. Cao, and K. Caldeira. 2012. Crop yields in a geoengineered climate. Nature Climate Change 2: 101-105.

[3] Winner, Langdon. 1980. Do artifacts have politics? Daedalus 109(1): 121-136.


Why a Trump presidency could spell big trouble for Taiwan


Presumptive Republican presidential nominee Donald Trump’s idea to withdraw American forces from Asia—letting allies like Japan and South Korea fend for themselves, including possibly by acquiring nuclear weapons—is fundamentally unsound, as I’ve written in a Wall Street Journal op-ed.

Preemptively pulling American forces out of Japan and South Korea would carry many dangers, including an increased risk of war between Japan and China and a serious blow to the Nuclear Non-Proliferation Treaty. It would also heighten the threat of war between China and Taiwan. The possibility that the United States would dismantle its Asia security framework could unsettle Taiwan enough that it would pursue a nuclear deterrent against China, as it has considered doing in the past—despite China indicating that such an act itself could be a pathway to war. And without bases in Japan, the United States could not as easily deter China from potential military attacks on Taiwan.

Trump’s proposed Asia policy could take the United States and its partners down a very dangerous road. It’s an experiment best not to run.


An accident of geography: Compassion, innovation, and the fight against poverty—A conversation with Richard C. Blum

Over the past 20 years, the proportion of the world population living in extreme poverty has decreased by over 60 percent, a remarkable achievement. Yet further progress requires expanded development finance and more innovative solutions for raising shared prosperity and ending extreme poverty. In his new book, “An Accident of Geography: Compassion, Innovation and the […]

bl

Power and problem solving top the agenda at Global Parliament of Mayors

When more than 40 mayors from cities around the world gathered in the fjordside city of Stavanger, Norway for the second Global Parliament of Mayors, two topics dominated the discussions: power and problem solving. The agenda included the usual sweep through the most pressing issues cities face today -- refugee resettlement, safety and security, resilience…


Classifying Sustainable Development Goal trajectories: A country-level methodology for identifying which issues and people are getting left behind


How much does the world spend on the Sustainable Development Goals?

Pouring several colors of paint into a single bucket produces a gray pool of muck, not a shiny rainbow. So too with discussions of financing the Sustainable Development Goals (SDGs). Jumbling too many issues into the same debate leads to policy muddiness rather than practical breakthroughs. Financing the SDGs requires a much more disaggregated mindset:…





bl

Leave no one behind: Time for specifics on the sustainable development goals

A central theme of the sustainable development goals (SDGs) is a pledge “that no one will be left behind.” Since the establishment of the SDGs in 2015, the importance of this commitment has only grown in political resonance throughout all parts of the globe. Yet, to drive meaningful results, the mantra needs to be matched…





bl

Building the SDG economy: Needs, spending, and financing for universal achievement of the Sustainable Development Goals

Pouring several colors of paint into a single bucket produces a gray pool of muck, not a shiny rainbow. Similarly, when it comes to discussions of financing the Sustainable Development Goals (SDGs), jumbling too many issues into the same debate leads to policy muddiness rather than practical breakthroughs. For example, the common “billions to trillions”…





bl

Are the traditional MDBs in trouble?


It certainly seems that way, judging by recent developments. Capital increases for the World Bank, for the Asian Development Bank (AsDB), for the African Development Bank (AfDB), and for the Inter-American Development Bank (IADB) are nowhere in sight, despite their constrained lending capacities. Replenishments of their soft-loan windows have been anemic. They face divisive debates about what role emerging economies should play in their governance and how their leaders should be selected. Competitors are nipping at their heels, with the Asian Infrastructure Investment Bank (AIIB) only the most recent example. News of drastic financial restructuring at the AsDB and of protracted reorganization in the World Bank adds to the questions about where the traditional Multilateral Development Banks (MDBs) are headed.

So let’s unpack the key challenges – and the main opportunities – that the traditional MDBs face. Based on the discussion at a recent roundtable of MDB representatives organized by the International Fund for Agricultural Development (IFAD) in Rome, I see seven principal challenges:

  • Progress in reducing extreme poverty and the graduation of many low-income countries to middle-income status have reduced the rationale for aid and the apparent need for MDBs.
  • The rapid growth of development finance channels means increasing competition in a crowded field of financial actors (private and non-governmental financial flows, new development finance institutions and vertical funds, and non-traditional donors).
  • Traditional donors face increasing domestic pressure to channel aid resources through their bilateral aid organizations, and they show a growing preference to earmark their funding, rather than support general core financing for MDBs. 
  • MDBs face a dramatic growth of competing knowledge providers (international and national consulting firms, universities and think tanks).
  • Inflexible governance structures limit the attractiveness of MDBs to their borrowers and to new donors. With traditional donors unwilling to give up control over vote, voice, leadership selection and lending practices, borrowers see the MDBs as unresponsive, risk averse, burdensome and costly. Emerging economy donors find MDBs unable or unwilling to absorb increased contributions with associated shifts in votes, voice and control. And since non-governmental actors cannot participate in the MDB governance structures, they do not contribute to MDB funding.
  • The revival of Cold War/East-West confrontation risks politicizing the institutions’ lending practices – the World Bank and European Bank for Reconstruction and Development (EBRD) stopped lending to Russia in the wake of the sanctions imposed by the West – and reinforces incentives for setting up new institutions.
  • Most MDBs find it difficult to engage directly with the private and social enterprise sectors. Due to constraints in their statutes, policies and staff capacity MDBs have not been able to provide much direct financing for private investments.

But there are also opportunities that the MDBs can capitalize on:

  • Despite the challenges that MDBs face in borrowing and donor countries, overall they remain trusted partners, due to a unique combination of strengths: their traditional political neutrality, freedom from special interests and corruption, technical professionalism, long-term development perspective and hands-on program design and finance engagement. Overdue reform of MDB governance and processes and effective resistance to political pressures can increase the trust all members put in them.
  • As we face increased risks of geo-political fragmentation, regionalization, and confrontation, the world will need the truly multilateral MDBs more rather than less, since they offer globally inclusive forums and instruments to help address pressing global and regional issues.
  • Despite remarkable progress, poverty reduction remains a huge task. Elimination of extreme poverty ($1.25 per day) by 2030 is a valid goal, but its achievement will not eliminate poverty. The billions of people living below $5 per day are poor. Poverty reduction will remain a valid goal for MDBs long beyond 2030.
  • The Post-2015 and climate change agendas provide a window of opportunity for MDBs to demonstrate their continued, and indeed enhanced, relevance to the global sustainable development agenda in low-income and middle-income countries. The huge role of the European Investment Bank in the European Union is one demonstration of the important role MDBs can play even for the advanced countries.
  • The MDBs’ unique package of services provides better value than the services offered by many competitors. Their combination of strong project preparation, supervision and finance, their attention to indebtedness constraints and sustainability requirements, their focus on policy and institutional capacity and their ability to forge multi-stakeholder partnerships provide strong and effective support. MDBs provide a steady compass in helping shift countries’ national priorities from short-term expediency to sound long-term policies and programs for sustained impact at scale.
  • MDBs have shown that they play a key role in responding to economic crises, natural disasters and conflict, as demonstrated for example by their response to the global financial and economic crisis of 2008/9.
  • MDBs can increase the leverage of their financial resources, as demonstrated by the recent restructuring of the AsDB, and broaden their engagement with the private sector, building on the successful experience of the International Finance Corporation and EBRD.

In sum, the creation of many copycat development banks demonstrates the remarkable strength and durability of the basic MDB model. As long as the traditional MDBs squarely face the challenges and opportunities, there’s plenty of life left in their old bones.





bl

Getting millions to learn: What will it take to accelerate progress on meeting the Sustainable Development Goals?


Event Information

April 18-19, 2016

Falk Auditorium
Brookings Institution
1775 Massachusetts Avenue NW
Washington, DC 20036



In 2015, 193 countries adopted the Sustainable Development Goals (SDGs), a new global agenda that is more ambitious than the preceding Millennium Development Goals and aims to make progress on some of the most pressing issues of our time. Goal 4, "To ensure inclusive and quality education for all, with relevant and effective learning outcomes," challenges the international education community to meet universal access plus learning by 2030. We know that access to primary schooling has scaled up rapidly over previous decades, but what can be learned from places where transformational changes in learning have occurred? What can governments, civil society, and the private sector do to more actively scale up quality learning?

On April 18-19, the Center for Universal Education (CUE) at Brookings launched "Millions Learning: Scaling Up Quality Education in Developing Countries," a comprehensive study that examines where learning has improved around the world and what factors have contributed to that process. This two-day event included two sessions. Monday, April 18 focused on the role of global actors in accelerating progress to meeting the SDGs. The second session on Tuesday, April 19 included a presentation of the Millions Learning report followed by panel discussions on the role of financing and technology in scaling education in developing countries.






bl

World Leadership for an International Problem

Editor's Note: For Campaign 2012, Ted Gayer wrote a policy brief proposing ideas for the next president on climate change. The following paper is a response to Gayer’s piece from Katherine Sierra. Charles Ebinger and Govinda Avasarala also prepared a response identifying five critical challenges the next president must address to help secure the nation’s energy…





bl

Progress paradoxes and sustainable growth

The past century is full of progress paradoxes, with unprecedented economic development, as evidenced by improvements in longevity, health, and literacy. At the same time, we face daunting challenges such as climate change, persistent poverty in poor and fragile states, and increasing income inequality and unhappiness in many of the richest countries. Remarkably, some of…





bl

Greece's financial trouble, and Europe's


I attended a fascinating dinner earlier this week with Greek Foreign Minister Nikos Kotzias as part of his whirlwind visit to Washington, DC. I shared with the minister some reflections on challenges facing him and the new Greek government at home in Greece and in Europe. When I served in Prague, I often urged the Europeans to take a page from our U.S. approach in 2009-10 and to avoid excessive austerity. I reiterated that view to the minister, and in particular pointed out the need for Germany to do more to help (see, for example, my colleague Ben Bernanke's recent post on the German current account surplus in his Brookings blog). Paul Krugman hit the nail on the head with his recent column as well. On a personal note, when my father found himself trapped in Poland in 1939 as the Nazis invaded, he made his way to Greece, which gave him shelter until he was able to escape to the United States in 1940. So I was able to thank the Foreign Minister for that as well (somewhat belatedly, but all the more heartfelt for that). I was impressed with the Minister's grasp of the Greek financial crisis and the many other important issues confronting Europe.


Image Source: © Kostas Tsironis / Reuters




bl

Australia’s Obligations Still Apply Despite High Court Win





bl

Principles for Transparency and Public Participation in Redistricting

Scholars from the Brookings Institution and the American Enterprise Institute are collaborating to promote transparency in redistricting. In January 2010, an advisory board of experts and representatives of good government groups was convened in order to articulate principles for transparent redistricting and to identify barriers to the public and communities who wish to create redistricting…





bl

Toward Public Participation in Redistricting

The drawing of legislative district boundaries is among the most self-interested and least transparent systems in American democratic governance. All too often, formal redistricting authorities maintain their control by imposing high barriers to transparency and to public participation in the process. Reform advocates believe that opening that process to the public could lead to different…





bl

The post-Paris clean energy landscape: Renewable energy in 2016 and beyond

Last year’s COP21 summit saw global economic powers and leading greenhouse gas emitters—including the United States, China, and India—commit to the most ambitious clean energy targets to date. Bolstered by sharp reductions in costs and supportive government policies, renewable power spread globally at its fastest-ever rate in 2015, accounting for more than half of the…





bl

Implementing Common Core: The problem of instructional time


This is part two of my analysis of instruction and Common Core’s implementation.  I dubbed the three-part examination of instruction “The Good, The Bad, and the Ugly.”  Having discussed “the good” in part one, I now turn to “the bad.”  One particular aspect of the Common Core math standards—the treatment of standard algorithms in whole number arithmetic—will lead some teachers to waste instructional time.

A Model of Time and Learning

In 1963, psychologist John B. Carroll published a short essay, “A Model of School Learning” in Teachers College Record.  Carroll proposed a parsimonious model of learning that expressed the degree of learning (or what today is commonly called achievement) as a function of the ratio of time spent on learning to the time needed to learn.     
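
In symbols (the notation is mine; Carroll stated the model in prose), the model reads:

$$ \text{degree of learning} \;=\; f\!\left(\frac{\text{time spent on learning}}{\text{time needed to learn}}\right) $$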

The numerator, time spent learning, has also been given the term opportunity to learn.  The denominator, time needed to learn, is synonymous with student aptitude.  By expressing aptitude as time needed to learn, Carroll refreshingly broke through his era’s debate about the origins of intelligence (nature vs. nurture) and the vocabulary that labels students as having more or less intelligence. He also spoke directly to a primary challenge of teaching: how to effectively produce learning in classrooms populated by students needing vastly different amounts of time to learn the exact same content.[i] 

The source of that variation is largely irrelevant to the constraints placed on instructional decisions.  Teachers obviously have limited control over the denominator of the ratio (they must take kids as they are) and less than one might think over the numerator.  Teachers allot time to instruction only after educational authorities have decided the number of hours in the school day, the number of days in the school year, the number of minutes in class periods in middle and high schools, and the amount of time set aside for lunch, recess, passing periods, various pull-out programs, pep rallies, and the like.  There are also announcements over the PA system, stray dogs that may wander into the classroom, and other unscheduled encroachments on instructional time.

The model has had a profound influence on educational thought.  As of July 5, 2015, Google Scholar reported 2,931 citations of Carroll’s article.  Benjamin Bloom’s “mastery learning” was deeply influenced by Carroll.  It is predicated on the idea that optimal learning occurs when time spent on learning—rather than content—is allowed to vary, providing to each student the individual amount of time he or she needs to learn a common curriculum.  This is often referred to as “students working at their own pace,” and progress is measured by mastery of content rather than seat time. David C. Berliner’s 1990 discussion of time includes an analysis of mediating variables in the numerator of Carroll’s model, including the amount of time students are willing to spend on learning.  Carroll called this persistence, and Berliner links the construct to student engagement and time on task—topics of keen interest to researchers today.  Berliner notes that although both are typically described in terms of motivation, they can be measured empirically in increments of time.     

Most applications of Carroll’s model have been interested in what happens when insufficient time is provided for learning—in other words, when the numerator of the ratio is significantly less than the denominator.  When that happens, students don’t have an adequate opportunity to learn.  They need more time. 

As applied to Common Core and instruction, one should also be aware of problems that arise from the inefficient distribution of time.  Time is a limited resource that teachers deploy in the production of learning.  Below I discuss instances when the CCSS-M may lead to the numerator in Carroll’s model being significantly larger than the denominator—when teachers spend more time teaching a concept or skill than is necessary.  Because time is limited and fixed, wasted time on one topic will shorten the amount of time available to teach other topics.  Excessive instructional time may also negatively affect student engagement.  Students who have fully learned content that continues to be taught may become bored; they must endure instruction that they do not need.
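
A minimal sketch of the model's bookkeeping may help make this concrete. The function names, the cap at 1.0, and the three hypothetical students are my illustration, not Carroll's notation:

```python
# Carroll's ratio, in stylized form: learning is capped once time spent
# reaches time needed; instruction beyond that point buys nothing.

def degree_of_learning(time_spent: float, time_needed: float) -> float:
    """Degree of learning as a (capped) ratio of time spent to time needed."""
    return min(1.0, time_spent / time_needed)

def wasted_time(time_spent: float, time_needed: float) -> float:
    """Instructional time beyond what a given student needs."""
    return max(0.0, time_spent - time_needed)

# Three hypothetical students who need different amounts of time for the
# same topic, all receiving the same 10 hours of instruction.
for needed in (6.0, 10.0, 15.0):
    print(f"needs {needed:>4} h: learning = {degree_of_learning(10.0, needed):.2f}, "
          f"wasted = {wasted_time(10.0, needed):.1f} h")
```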

Standard Algorithms and Alternative Strategies

Jason Zimba, one of the lead authors of the Common Core Math standards, and Barry Garelick, a critic of the standards, had a recent, interesting exchange about when standard algorithms are called for in the CCSS-M.  A standard algorithm is a series of steps designed to compute accurately and quickly.  In the U.S., students are typically taught the standard algorithms of addition, subtraction, multiplication, and division with whole numbers.  Most readers of this post will recognize the standard algorithm for addition.  It involves lining up two or more multi-digit numbers according to place-value, with one number written over the other, and adding the columns from right to left with “carrying” (or regrouping) as needed.
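
Since the argument below turns on this algorithm, it may help to see just how mechanical it is. Here is a short sketch of column addition with carrying (the code is mine, not from the standards or any curriculum):

```python
def standard_addition(a: int, b: int) -> int:
    """Paper-and-pencil addition: align digits by place value, add the
    columns right to left, and carry when a column's sum reaches 10."""
    x, y = str(a), str(b)
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)      # line up place values
    carry, digits = 0, []
    for dx, dy in zip(reversed(x), reversed(y)):
        column = int(dx) + int(dy) + carry
        digits.append(str(column % 10))        # write the ones digit
        carry = column // 10                   # regroup into the next column
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits)))

assert standard_addition(19, 6) == 25          # carrying: 9 + 6 = 15
```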

The standard algorithm is the only algorithm required for students to learn, although others are mentioned beginning with the first grade standards.  Curiously, though, CCSS-M doesn’t require students to know the standard algorithms for addition and subtraction until fourth grade.  This opens the door for a lot of wasted time.  Garelick questioned the wisdom of teaching several alternative strategies for addition.  He asked whether, under the Common Core, only the standard algorithm could be taught—or at least, whether it could be taught first.  As he explains:

Delaying teaching of the standard algorithm until fourth grade and relying on place value “strategies” and drawings to add numbers is thought to provide students with the conceptual understanding of adding and subtracting multi-digit numbers. What happens, instead, is that the means to help learn, explain or memorize the procedure become a procedure unto itself and students are required to use inefficient cumbersome methods for two years. This is done in the belief that the alternative approaches confer understanding, so are superior to the standard algorithm. To teach the standard algorithm first would in reformers’ minds be rote learning. Reformers believe that by having students using strategies in lieu of the standard algorithm, students are still learning “skills” (albeit inefficient and confusing ones), and these skills support understanding of the standard algorithm. Students are left with a panoply of methods (praised as a good thing because students should have more than one way to solve problems), that confuse more than enlighten. 

 

Zimba responded that the standard algorithm could, indeed, be the only method taught because it meets a crucial test: reinforcing knowledge of place value and the properties of operations.  He goes on to say that other algorithms also may be taught that are consistent with the standards, but that the decision to do so is left in the hands of local educators and curriculum designers:

In short, the Common Core requires the standard algorithm; additional algorithms aren’t named, and they aren’t required…Standards can’t settle every disagreement—nor should they. As this discussion of just a single slice of the math curriculum illustrates, teachers and curriculum authors following the standards still may, and still must, make an enormous range of decisions.

 

Zimba defends delaying mastery of the standard algorithm until fourth grade, referring to it as a “culminating” standard that he would, if he were teaching, introduce in earlier grades.  Zimba illustrates the curricular progression he would employ in a table, showing that he would introduce the standard algorithm for addition late in first grade (with two-digit addends) and then extend the complexity of its use and provide practice towards fluency until reaching the culminating standard in fourth grade. Zimba would introduce the subtraction algorithm in second grade and similarly ramp up its complexity until fourth grade.

 

It is important to note that in CCSS-M the word “algorithm” appears for the first time (in plural form) in the third grade standards:

 

3.NBT.2  Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.

 

The term “strategies and algorithms” is curious.  Zimba explains, “It is true that the word ‘algorithms’ here is plural, but that could be read as simply leaving more choice in the hands of the teacher about which algorithm(s) to teach—not as a requirement for each student to learn two or more general algorithms for each operation!” 

 

I have described before the “dog whistles” embedded in the Common Core, signals to educational progressives—in this case, math reformers—that  despite these being standards, the CCSS-M will allow them great latitude.  Using the plural “algorithms” in this third grade standard and not specifying the standard algorithm until fourth grade is a perfect example of such a dog whistle.

 

Why All the Fuss about Standard Algorithms?

It appears that the Common Core authors wanted to reach a political compromise on standard algorithms. 

 

Standard algorithms were a key point of contention in the “Math Wars” of the 1990s.   The 1997 California Framework for Mathematics required that students know the standard algorithms for all four operations—addition, subtraction, multiplication, and division—by the end of fourth grade.[ii]  The 2000 Massachusetts Mathematics Curriculum Framework called for learning the standard algorithms for addition and subtraction by the end of second grade and for multiplication and division by the end of fourth grade.  These two frameworks were heavily influenced by mathematicians (from Stanford in California and Harvard in Massachusetts) and quickly became favorites of math traditionalists.  In both states’ frameworks, the standard algorithm requirements were in direct opposition to the reform-oriented frameworks that preceded them—in which standard algorithms were barely mentioned and alternative algorithms or “strategies” were encouraged. 

 

Now that the CCSS-M has replaced these two frameworks, the requirement for knowing the standard algorithms in California and Massachusetts slips from third or fourth grade all the way to sixth grade.  That’s what reformers get in the compromise.  They are given a green light to continue teaching alternative algorithms, as long as the algorithms are consistent with teaching place value and properties of arithmetic.  But the standard algorithm is the only one students are required to learn.  And that exclusivity is intended to please the traditionalists.

 

I agree with Garelick that the compromise leads to problems.  In a 2013 Chalkboard post, I described a first grade math program in which parents were explicitly requested not to teach the standard algorithm for addition when helping their children at home.  The students were being taught how to represent addition with drawings that clustered objects into groups of ten.  The exercises were both time consuming and tedious.  When the parents met with the school principal to discuss the matter, the principal told them that the math program was following the Common Core by promoting deeper learning.  The parents withdrew their child from the school and enrolled him in private school.

 

The value of standard algorithms is that they are efficient and packed with mathematics.  Once students have mastered single-digit operations and the meaning of place value, the standard algorithms reveal to students that they can take procedures that they already know work well with one- and two-digit numbers, and by applying them over and over again, solve problems with large numbers.  Traditionalists and reformers have different goals.  Reformers believe exposure to several algorithms encourages flexible thinking and the ability to draw on multiple strategies for solving problems.  Traditionalists believe that a bigger problem than students learning too few algorithms is that too few students learn even one algorithm.

 

I have been a critic of the math reform movement since I taught in the 1980s.  But some of their complaints have merit.  All too often, instruction on standard algorithms has left out meaning.  As Karen C. Fuson and Sybilla Beckmann point out, “an unfortunate dichotomy” emerged in math instruction: teachers taught “strategies” that implied understanding and “algorithms” that implied procedural steps that were to be memorized.  Michael Battista’s research has provided many instances of students clinging to algorithms without understanding.  He gives an example of a student who has not quite mastered the standard algorithm for addition and makes numerous errors on a worksheet.  On one item, for example, the student forgets to carry and calculates that 19 + 6 = 15.  In a post-worksheet interview, the student counts 6 units from 19 and arrives at 25.  Despite the obvious discrepancy—25 is not 15, the student agrees—he declares that his answers on the worksheet must be correct because the algorithm he used “always works.”[iii]

 

Math reformers rightfully argue that blind faith in procedure has no place in a thinking mathematical classroom. Who can disagree with that?  Students should be able to evaluate the validity of answers, regardless of the procedures used, and propose alternative solutions.  Standard algorithms are tools to help them do that, but students must be able to apply them, not in a robotic way, but with understanding.

 

Conclusion

Let’s return to Carroll’s model of time and learning.  I conclude by making two points—one about curriculum and instruction, the other about implementation.

In the study of numbers, a coherent K-12 math curriculum, similar to that of the previous California and Massachusetts frameworks, can be sketched in a few short sentences.  Addition with whole numbers (including the standard algorithm) is taught in first grade, subtraction in second grade, multiplication in third grade, and division in fourth grade.  Thus, the study of whole number arithmetic is completed by the end of fourth grade.  Grades five through seven focus on rational numbers (fractions, decimals, percentages), and grades eight through twelve study advanced mathematics.  Proficiency is sought along three dimensions:  1) fluency with calculations, 2) conceptual understanding, 3) ability to solve problems.

Placing the CCSS-M standard for knowing the standard algorithms of addition and subtraction in fourth grade delays this progression by two years.  Placing the standard for the division algorithm in sixth grade continues the two-year delay.  For many fourth graders, time spent working on addition and subtraction will be wasted time.  They already have a firm understanding of addition and subtraction.  The same holds for many sixth graders—time devoted to the division algorithm will be wasted time that should be devoted to the study of rational numbers.  The numerator in Carroll’s instructional time model will be greater than the denominator, indicating the inefficient allocation of time to instruction.

As Jason Zimba points out, not everyone agrees on when the standard algorithms should be taught, the alternative algorithms that should be taught, the manner in which any algorithm should be taught, or the amount of instructional time that should be spent on computational procedures.  Such decisions are made by local educators.  Variation in these decisions will introduce variation in the implementation of the math standards.  It is true that standards, any standards, cannot control implementation, especially the twists and turns in how they are interpreted by educators and brought to life in classroom instruction.  But in this case, the standards themselves are responsible for the myriad approaches, many unproductive, that we are sure to see as schools teach various algorithms under the Common Core.


[i] Tracking, ability grouping, differentiated learning, programmed learning, individualized instruction, and personalized learning (including today’s flipped classrooms) are all attempts to solve the challenge of student heterogeneity.  

[ii] An earlier version of this post incorrectly stated that the California framework required that students know the standard algorithms for all four operations by the end of third grade. I regret the error.

[iii] Michael T. Battista (2001).  “Research and Reform in Mathematics Education,” pp. 32-84 in The Great Curriculum Debate: How Should We Teach Reading and Math? (T. Loveless, ed., Brookings Institution Press).





bl

Three cheers for logrolling: The demise of the Sustainable Growth Rate (SGR)


Editor's note: This post originally appeared in the New England Journal of Medicine's Perspective online series on April 22, 2015.

Congress has finally euthanized the sustainable growth rate formula (SGR). Enacted in 1997 to hold down the growth of Medicare spending on physician services, the formula initially worked more or less as intended. Then it began to call for progressively larger and more unrealistic fee cuts — nearly 30% in some years, 21% in 2015. Aware that such cuts would be devastating, Congress repeatedly postponed them, and most observers understood that such cuts would never be implemented. Still, many physicians fretted that the unthinkable might happen.

Now Congress has scrapped the SGR, replacing it with still-embryonic but promising incentives that could catalyze increased efficiency and greater cost control than the old, flawed formula could ever really have done, in a law that includes many other important provisions. How did such a radical change occur?  And why now?

The “how” was logrolling — the trading of votes by legislators in order to pass legislation of interest to each of them. Logrolling has become a dirty word, a much-reviled political practice. But the Medicare Access and CHIP (Children’s Health Insurance Program) Reauthorization Act (MACRA), negotiated by House leaders John Boehner (R-OH) and Nancy Pelosi (D-CA) and their staffs, is a reminder that old-time political horse trading has much to be said for it.

The answer to “why now?” can be found in the technicalities of budget scoring. Under the SGR, Medicare’s physician fees were tied through a complex formula to a target based on caseloads, practice costs, and the gross domestic product. When current spending on physician services exceeded the targets, the formula called for fee cuts to be applied prospectively. Fee cuts that were not implemented were carried forward and added to any future cuts the formula might generate. Because Congress repeatedly deferred cuts, a backlog developed. By 2012, this backlog combined with assumed rapid future growth in Medicare spending caused the Congressional Budget Office (CBO) to estimate the 10-year cost of repealing the SGR at a stunning $316 billion.
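
That carry-forward mechanism is what made the called-for cuts snowball. A stylized sketch of the dynamic (not the statutory formula; the 2 percent annual cut is a hypothetical number chosen for illustration):

```python
# Stylized SGR dynamic: each year the formula calls for a cut; Congress
# defers it; the deferred cut is added to whatever the formula generates
# the following year, so the required cut compounds.
annual_formula_cut = 0.02   # hypothetical cut generated each year
backlog = 0.0               # cumulative deferred cuts

for year in range(1998, 2016):
    called_for = annual_formula_cut + backlog  # this year's required cut
    backlog = called_for                       # Congress defers it again

print(f"cut called for by 2015: {backlog:.0%}")  # ~36% under these inputs
```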

For many years, Congress looked the costs of repealing the SGR squarely in the eye — and blinked. The cost of a 1-year delay, as estimated by the CBO, was a tiny fraction of the cost of repeal. So Congress delayed — which is hardly surprising.

But then, something genuinely surprising did happen. The growth of overall health care spending slowed, causing the CBO to slash its estimates of the long-term cost of repealing the SGR. By 2015, the 10-year price of repeal had fallen to $136 billion. Even this number was a figment of budget accounting, since the chance that the fee cuts would ever have been imposed was minuscule. But the smaller number made possible the all-too-rare bipartisan collaboration that produced the legislation that President Barack Obama has just signed.

The core of the law is repeal of the SGR and abandonment of the 21% cut in Medicare physician fees it called for this year. In its place is a new method of paying physicians under Medicare. Some elements are specified in law; some are to be introduced later. The hard-wired elements include annual physician fee updates of 0.5% per year through 2019 and 0% from 2020 through 2025, along with a “merit-based incentive payment system” (MIPS) that will replace current incentive programs that terminate in 2018. The new program will assess performance in four categories: quality of care, resource use, meaningful use of electronic health records, and clinical practice improvement activities. Bonuses and penalties, ranging from +12% to –4% in 2020, and increasing to +27% to –9% for 2022 and later, will be triggered by performance scores in these four areas. The exact content of the MIPS will be specified in rules that the secretary of health and human services is to develop after consultation with physicians and other health care providers.
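
For reference, the adjustment ranges just described can be arranged as a simple lookup; the shape of the table is my illustration, and only the categories and percentages come from the description above:

```python
# MIPS performance categories named in the law, per the text above.
MIPS_CATEGORIES = (
    "quality of care",
    "resource use",
    "meaningful use of electronic health records",
    "clinical practice improvement activities",
)

# Year -> (maximum bonus, maximum penalty) triggered by performance scores.
# The 2022 entry applies to 2022 and later years.
MIPS_ADJUSTMENT_RANGE = {
    2020: (+0.12, -0.04),
    2022: (+0.27, -0.09),
}
```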

Higher fees will be available to professionals who work in “alternative payment organizations” that typically will move away from fee-for-service payment, cover multiple services, show that they can limit the growth of spending, and use performance-based methods of compensation. These and other provisions will ramp up pressure on physicians and other providers to move from traditional individual or small-group fee-for-service practices into risk-based multi-specialty settings that are subject to management and oversight more intense than that to which most practitioners are yet accustomed.

Both parties wanted to bury the SGR. But MACRA contains other provisions, unrelated to the SGR, that appeal to discrete segments of each party. Democrats had been seeking a 4-year extension of CHIP, which serves 8 million children and pregnant women. They were running into stiff headwinds from conservatives who wanted to scale back the program. MACRA extends CHIP with no cuts but does so for only 2 years.  It also includes a number of other provisions sought by Democrats: a 2-year extension of the Maternal, Infant, and Early Childhood Home Visiting program, plus permanent extensions of the Qualified Individual program, which pays Part B Medicare premiums for people with incomes just over the federal poverty thresholds, and transitional medical assistance, which preserves Medicaid eligibility for up to 1 year after a beneficiary gets a job.

The law also facilitates access to health benefits. MACRA extends for two years states’ authority to enroll applicants for health benefits on the basis of data on income, household size, and other factors gathered when people enroll in other programs such as the Supplemental Nutrition Assistance Program, the National School Lunch Program, Temporary Assistance for Needy Families (“welfare”), or Head Start. It also provides $7.2 billion over the next two years to support community health centers, extending funding established in the Affordable Care Act.

Elements of each party, concerned about budget deficits, wanted provisions to pay for the increased spending. They got some of what they wanted, but not enough to prevent some conservative Republicans in both the Senate and the House from opposing final passage. Many conservatives have long sought to increase the proportion of Medicare Part B costs that are covered by premiums. Most Medicare beneficiaries pay Part B premiums covering 25% of the program’s actuarial value. Relatively high-income beneficiaries pay premiums that cover 35, 50, 65, or 80% of that value, depending on their income. Starting in 2018, MACRA will raise the 50% and 65% premiums to 65% and 80%, respectively, affecting about 2% of Medicare beneficiaries. No single person with an income (in 2015 dollars) below $133,501 or couple with income below $267,001 would be affected initially. MACRA freezes these thresholds through 2019, after which they are indexed for inflation. Under previous law, the thresholds were to have been greatly increased in 2019, reducing the number of high-income Medicare beneficiaries to whom these higher premiums would have applied. (For reference, half of all Medicare beneficiaries currently have incomes below $26,000 a year.)

A second provision bars Medigap plans from covering the Part B deductible, which is now $147. By exposing more people to deductibles, this provision will cause some reduction in Part B spending. Everyone who buys such plans will see reduced premiums; some will face increased out-of-pocket costs. The financial effects either way will be small.

Inflexible adherence to principle contributes to the political gridlock that has plunged rates of public approval of Congress to subfreezing lows. MACRA is a reminder of the virtues of compromise and quiet negotiation. A small group of congressional leaders and their staffs crafted a law that gives something to most members of both parties. Today’s appalling norm of poisonously polarized politics makes this instance of political horse trading seem nothing short of miraculous.


Publication: NEJM




bl

Why fewer jobless Americans are counting on disability


As government funding for disability insurance is expected to run out next year, Congress should re-evaluate the costs of the program.

Nine million people in America today are receiving Social Security Disability Insurance, double the number in 1995 and six times the number in 1970. With statistics like that, it’s hardly surprising to see some in Congress worry that more people will enroll in the program and that costs will continue to rise, especially since government funding for disability insurance is expected to run out by the end of next year. If Congress does nothing, benefits will fall by 19% immediately following next year’s presidential election. So, Congress will likely do something. But what exactly should it do?

Funding for disability insurance has nearly run out of money before. Each time, Congress has simply increased the share of the Social Security payroll tax that goes for disability insurance. This time, however, many members of Congress oppose such a shift unless it is linked to changes that curb eligibility and promote return to work. They fear that rolls will keep growing and costs will keep rising, but a report by a government panel concludes that disability insurance rolls have stopped rising and will likely shrink. The report, authored by a panel of the Social Security Advisory Board, is important because many of the factors that caused disability insurance rolls to rise, particularly during the Great Recession, have ended.

  • Baby-boomers, who added to the rolls as they reached the disability-prone middle age years, are aging out of disability benefits and into retirement benefits. 

  • The decades-long flood of women into the labor force increased the pool of people with the work histories needed to be eligible for disability insurance. But women’s labor force participation has fallen a bit from pre-Great Recession peaks, and is not expected to rise materially again. 

  • The Great Recession, which led many who lost jobs and couldn’t find work to apply for disability insurance, is over and applications are down. A recession as large as that of 2008 is improbable any time soon. 

  • Approval rates by administrative law judges, who for many years were suspected of being too ready to approve applications, have been falling. Whatever the cause, this stringency augurs a fall in the disability insurance rolls.

Nonetheless, the Disability Insurance program is not without serious flaws. At the front end, employers, who might help workers with emerging impairments remain on the job by providing therapy or training, have little incentive to do either. Employers often save money if workers leave and apply for benefits. Creating a financial incentive to encourage employers to help workers stay active is something both liberals and conservatives can and should embrace. Unfortunately, figuring out exactly how to do that remains elusive.

At the next stage, applicants who are initially denied benefits confront intolerable delays. They must wait an average of nearly two years to have their cases finally decided and many wait far longer. For the nearly 1 million people now in this situation, the effects can be devastating. As long as their application is pending, applicants risk immediate rejection if they engage in ‘substantial gainful activity,’ which is defined as earning more than $1,090 in any month. This virtual bar on work brings a heightened risk of utter destitution. Work skills erode and the chance of ever reentering the workforce all but vanishes. Speeding eligibility determination is vital but just how to do so is also enormously controversial.

For workers judged eligible for benefits, numerous provisions intended to encourage work are not working. People have advanced ideas on how to help workers regain marketplace skills and to make it worthwhile for them to return to work. But evidence that they will work is scant.

The problems are clear enough. As noted, solutions are not. Analysts have come up with a large number of proposed changes in the program. Two task forces, one organized by The Bipartisan Policy Center and one by the Committee for a Responsible Federal Budget, have come up with lengthy menus of possible modifications to the current program. Many have theoretical appeal. None has been sufficiently tested to allow evidence-based predictions on how they would work in practice.

So, with the need to do something to sustain benefits and to do it fast, Congress confronts a program with many problems for which a wide range of untested solutions have been proposed. Studies and pilots of some of these ideas are essential and should accompany the transfer of payroll tax revenues necessary to prevent a sudden and unjustified cut in benefits for millions of impaired people who currently have little chance of returning to work. Implementing such a research program now will enable Congress to improve a program that is vital, but that is acknowledged to have serious problems.

And the good news, delivered by a group of analysts, is that rapid growth of enrollments will not break the bank before such studies can be carried out.



Editor's Note: This post originally appeared on Fortune Magazine.


Publication: Fortune Magazine
Image Source: © Randall Hill / Reuters




bl

Is the ACA in trouble?


Editor's Note: This post originally appeared in InsideSources. The author wishes to thank Kevin Lucia for helpful comments and suggestions.

United Health Care’s surprise announcement that it is considering whether to stop selling health insurance through the Affordable Care Act’s health exchanges in 2017 and is also pulling marketing and broker commissions in 2016 has health policy analysts scratching their heads. The announcement is particularly puzzling, as just a month ago, United issued a bullish announcement that it was planning to expand to 11 additional individual markets, taking its total to 34.

United’s stated reason is that this business is unprofitable. That may be true, but it is odd that the largest health insurer in the nation would vacate a growing market without putting up a fight. Is United’s announcement seriously bad news for Obamacare, as many commentators have asserted? Is United seeking concessions in another area and using this announcement as a bargaining chip? Or, is something else going on? The answer, I believe, is that the announcement, while a bit of all of these things, is less significant than many suppose.

To make sense of United’s actions, one has to understand certain peculiarities of United’s business model and some little-understood aspects of the Affordable Care Act.

  • Most of United’s business consists of group sales of insurance through employers who offer plans to their employees as a fringe benefit. United has chosen not to sell insurance aggressively to individuals in most places and, where it does, not to offer the lowest-premium plans. In some states, it does not sell to individuals at all.
  • In 49 states, insurers may sell plans either through the ACA health exchange or directly to customers outside the exchanges. The exceptions are Vermont and the District of Columbia in which individuals buying insurance must go through their exchanges. Thus, insurers may find that “good” risks—those with below-average use of health care—disproportionately buy directly, while the “poor” risks buy through the exchanges.
  • State regulators must review insurance premiums to assure that they are reasonable and set other rules that insurers must follow. This process typically involves some negotiation. With varying skill and intensity, state insurance commissioners try to hold down prices. If they are too lax, buyers may be overcharged. If they are too aggressive, insurers may simply withdraw from the market, causing politically-unpopular inconvenience. These negotiations go on separately in 50 states and the District of Columbia each and every year.
  • Finally, fewer people are now expected to buy insurance through the health exchanges than was expected a couple of years ago. ACA subsidies are modest for people with moderate incomes and the penalties for not carrying insurance have been small. Some people with modest incomes face high deductibles, high out-of-pocket costs, narrow networks of providers, or some mix of all three. As a result, some people who expected not to need much health care have chosen to ‘go bare’ and pay the modest penalties for not carrying insurance.

What seems to have happened—one can’t be sure, as the United announcement is Delphic—is that the company, which mostly delayed its participation in the individual exchanges until 2015, incurred substantial start-up costs, enrolled few customers, who turned out to be sicker than anticipated, and experienced more-than-anticipated attrition. Other insurers, including Blue Cross/Blue Shield plans nationwide, which hold a dominant position in individual markets in many states, did well enough that Joseph Swedish, CEO of Anthem, Inc., one of the largest of the ‘Blues,’ announced that his company is firmly committed to the exchanges. But minor players in the individual market, such as United, may have concluded that the costs of developing that market are too high for the expected pay-off.

In evaluating these diverse factors, one needs to recognize that the ACA, in general, and the health exchanges, in particular, have changed insurance markets in fundamental ways. Millions of people who were previously uninsured are now trying to understand the bewildering complexities of health insurance. Insurance companies have a lot to learn, too. The ACA now bars insurance companies from ‘underwriting’—the practice of varying premiums based on the characteristics of individual customers, something at which they were quite expert. Under the ACA, insurance companies must sell insurance to all comers, however sick they may be, and must charge premiums that can vary only based on age. Now, companies must ‘manage’ risk, which is easier for a company with a large market share of the individual market, as the Blues have in most states, than it is for a company like United with only a small share.

What this means is that United’s announcement is regrettable news for those states from which they may decide to withdraw, as its departure would reduce competition. United might also use the threat of departure to negotiate favorable terms with states and the Administration. And it means that federal regulators need to write regulations to discourage individual customers from practices that unfairly saddle insurers with risks, such as buying insurance outside open-enrollment periods designed for exceptional circumstances and then dropping coverage a few months later. But it would be a mistake to treat United’s announcement, presumably made for good and sufficient business reasons, as a portentous omen of an ACA crisis.


Publication: InsideSources




bl

The impossible (pipe) dream—single-payer health reform


Led by presidential candidate Bernie Sanders, one-time supporters of ‘single-payer’ health reform are rekindling their romance with a health reform idea that was, is, and will remain a dream.  Single-payer health reform is a dream because, as the old joke goes, ‘you can’t get there from here.’

Let’s be clear: opposing a proposal only because one believes it cannot be passed is usually a dodge. One should judge the merits. Strong leaders prove their skill by persuading people to embrace their visions. But single-payer is different. It is radical in a way that no legislation has ever been in the United States.

Not so, you may be thinking. Remember such transformative laws as the Social Security Act, Medicare, the Homestead Act, and the Interstate Highway Act. And, yes, remember the Affordable Care Act. Those and many other inspired legislative acts seemed revolutionary enough at the time. But none really was. None overturned entrenched and valued contractual and legislative arrangements. None reshuffled trillions—or in less inflated days, billions—of dollars devoted to the same general purpose as the new legislation. All either extended services previously available to only a few, or created wholly new arrangements.

To understand the difference between those past achievements and the idea of replacing current health insurance arrangements with a single-payer system, compare the Affordable Care Act with Sanders’ single-payer proposal.

Criticized by some for alleged radicalism, the ACA is actually stunningly incremental. Most of the ACA’s expanded coverage comes through extension of Medicaid, an existing public program that serves more than 60 million people. The rest comes through purchase of private insurance in “exchanges,” which embody the conservative ideal of a market that promotes competition among private venders, or through regulations that extended the ability of adult offspring to remain covered under parental plans. The ACA minimally altered insurance coverage for the 170 million people covered through employment-based health insurance. The ACA added a few small benefits to Medicare but left it otherwise untouched. It left unaltered the tax breaks that support group insurance coverage for most working age Americans and their families. It also left alone the military health programs serving 14 million people. Private nonprofit and for-profit hospitals, other vendors, and privately employed professionals continue to deliver most care.

In contrast, Senator Sanders’ plan, like the earlier proposal sponsored by Representative John Conyers (D-Michigan) which Sanders co-sponsored, would scrap all of those arrangements. Instead, people would simply go to the medical care provider of their choice and bills would be paid from a national trust fund. That sounds simple and attractive, but it raises vexatious questions.

  • How much would it cost the federal government? Where would the money to cover the costs come from?
  • What would happen to the $700 billion that employers now spend on health insurance?
  • Where would the $600 billion a year in reductions in total health spending that Sanders says his plan would generate come from?
  • What would happen to special facilities for veterans and families of members of the armed services?

Sanders has answers for some of these questions, but not for others. Both the answers and non-answers show why single payer is unlike past major social legislation.

The answer to the question of how much single payer would cost the federal government is simple: $4.1 trillion a year, or $1.4 trillion more than the federal government now spends on programs that the Sanders plan would replace. The money would come from new taxes. Half the added revenue would come from doubling the payroll tax that employers now pay for Social Security. This tax approximates what employers now collectively spend on health insurance for their employees...if they provide health insurance. But many don’t. Some employers would face large tax increases. Others would reap windfall gains.

The cost question is particularly knotty, as Sanders assumes a 20 percent cut in spending averaged over ten years, even as roughly 30 million currently uninsured people would gain coverage. Those savings, even if actually realized, would start slowly, which means cuts of 30 percent or more by Year 10. Where would they come from? Savings from reduced red tape associated with individual insurance would cover a small fraction of this target. The major source would have to be fewer services or reduced prices. Who would determine which of the services physicians regard as desirable -- and patients have come to expect -- are no longer ‘needed’? How would such cuts be achieved without massive bankruptcies among hospitals, which, as columnist Ezra Klein has suggested, would follow such spending cuts? What would be the reaction to the prospect of drastic cuts in salaries of health care personnel – would we have a shortage of doctors and nurses? Would patients tolerate a reduction in services? If people thought that services under the Sanders plan were inadequate, would they be allowed to ‘top up’ with private insurance? If so, what happens to simplicity? If not, why not?
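
The arithmetic behind "cuts of 30 percent or more by Year 10" is worth a quick sketch. Under one assumption I am adding (savings phase in linearly from zero), a 20 percent ten-year average implies a year-10 cut near 36 percent:

```python
# Minimal sketch, assuming savings ramp linearly from near zero in year 1.
# If cuts must average 20% over ten years, solve for the final-year cut.
target_average = 0.20
weights = [t / 10 for t in range(1, 11)]         # linear phase-in, years 1-10
final_year_cut = target_average * len(weights) / sum(weights)
print(f"implied cut by year 10: {final_year_cut:.0%}")  # about 36%
```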

Let me be clear: we know that high quality health care can be delivered at much lower cost than is the U.S. norm. We know because other countries do it. In fact, some of them have plans not unlike the one Senator Sanders is proposing. We know that single-payer mechanisms work in some countries. But those systems evolved over decades, based on gradual and incremental change from what existed before. That is the way that public policy is made in democracies. Radical change may occur after a catastrophic economic collapse or a major war. But in normal times, democracies do not tolerate radical discontinuity. If you doubt me, consider the tumult precipitated by the really quite conservative Affordable Care Act.


Editor's note: This piece originally appeared in Newsweek.


Publication: Newsweek
Image Source: © Jim Young / Reuters




bl

Recent Social Security blogs—some corrections


Recently, Brookings has posted two articles commenting on proposals to raise the full retirement age for Social Security retirement benefits from 67 to 70. One revealed a fundamental misunderstanding of how the program actually works and what the effects of the policy change would be. The other proposed changes to the system that would subvert the fundamental purpose of Social Security in the name of ‘reforming’ it.

A number of Republican presidential candidates and others have proposed raising the full retirement age. In a recent blog, Robert Shapiro, a Democrat, opposed this move, a position I applaud. But he did so based on alleged effects the proposal would in fact not have, and misunderstanding about how the program actually works. In another blog, Stuart Butler, a conservative, noted correctly that increasing the full benefit age would ‘bolster the system’s finances,’ but misunderstood this proposal’s effects. He proposed instead to end Social Security as a universal pension based on past earnings and to replace it with income-related welfare for the elderly and disabled (which he calls insurance).

Let’s start with the misunderstandings common to both authors and to many others. Each writes as if raising the ‘full retirement age’ from 67 to 70 would fall more heavily on those with comparatively low incomes and short life expectancies. In fact, raising the ‘full retirement age’ would cut Social Security Old-Age Insurance benefits by the same proportion for rich and poor alike, and for people whose life expectancies are long or short. To see why, one needs to understand how Social Security works and what ‘raising the full retirement age’ means.

People may claim Social Security retirement benefits starting at age 62. If they wait, they get larger benefits—about 6-8 percent more for each year they delay claiming up to age 70. Those who don’t claim their benefits until age 70 qualify for benefits 77 percent higher than those with the same earnings history who claim at age 62. The increments approximately compensate the average person for waiting, so that the lifetime value of benefits is independent of the age at which they claim. Mechanically, the computation pivots on the benefit payable at the ‘full retirement age,’ now age 66, but set to increase to age 67 under current law. Raising the full retirement age still more, from 67 to 70, would mean that people age 70 would get the same benefit payable under current law at age 67. That is a benefit cut of 24 percent. Because the annual percentage adjustment for waiting to claim would be unchanged, people who claim benefits at any age, down to age 62, would also receive benefits reduced by 24 percent.
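
The mechanics can be sketched in a few lines. One simplifying assumption here is mine: a single 8-percent-per-year actuarial adjustment on both sides of the full retirement age, whereas the statutory schedule uses different rates for early claiming (which is why the uniform cut below comes out near 21 percent rather than the 24 percent cited above). The structural point survives the simplification: raising the full retirement age scales benefits down by the same factor at every claiming age.

```python
# Stylized benefit formula: the benefit pivots on the amount payable at
# the full retirement age (FRA) and is adjusted ~8% per year of delay.
def benefit(pia: float, claim_age: int, fra: int) -> float:
    """Benefit = PIA scaled up or down 8% per year relative to the FRA."""
    return pia * 1.08 ** (claim_age - fra)

pia = 1000.0
for claim_age in range(62, 71):
    current = benefit(pia, claim_age, fra=67)    # current law
    proposed = benefit(pia, claim_age, fra=70)   # FRA raised to 70
    cut = 1 - proposed / current
    print(f"claim at {claim_age}: cut = {cut:.1%}")  # identical at every age
```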

In plain English, ‘raising the full benefit age from 67 to 70’ is simply a 24 percent across-the-board cut in benefits for all new claimants, whatever their incomes and whatever their life expectancies.
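To make the arithmetic concrete, the following minimal Python sketch applies the statutory adjustment factors (reductions of 5/9 of 1 percent per month for the first 36 months claimed before the full retirement age and 5/12 of 1 percent per month beyond that, plus delayed-retirement credits of 2/3 of 1 percent per month up to age 70) and assumes that this schedule simply shifts when the full retirement age moves from 67 to 70. The sketch is illustrative, not part of the original analysis; under these assumptions the cut works out to roughly one-fifth of the current-law benefit at every claiming age, and that rough uniformity, rich or poor, early or late claimant, is exactly the point.

    # Illustrative sketch: benefit as a share of the full-retirement-age benefit
    # (the PIA) at each claiming age, under current law (FRA 67) and under the
    # proposal (FRA 70). Assumes the statutory schedule extends unchanged.

    def benefit_fraction(claim_age, fra):
        """Benefit as a fraction of the PIA for a given claiming age and FRA."""
        months = (claim_age - fra) * 12
        if months >= 0:
            # Delayed-retirement credits: 2/3 of 1% per month (8% per year).
            return 1.0 + months * (2 / 3) / 100
        early = -months
        # Early-claiming reduction: 5/9 of 1% per month for the first 36 months,
        # 5/12 of 1% per month for each month beyond that.
        return 1.0 - (min(early, 36) * (5 / 9) + max(early - 36, 0) * (5 / 12)) / 100

    for age in range(62, 71):
        current = benefit_fraction(age, fra=67)
        proposed = benefit_fraction(age, fra=70)
        cut = 1 - proposed / current
        print(f"claim at {age}: {current:.0%} -> {proposed:.0%} of PIA "
              f"(about {cut:.0%} below current law)")

Whatever precise percentage one attaches to the cut, the schedule shows that claimants at every age from 62 to 70 lose in nearly the same proportion.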

Thus, Robert Shapiro mistakenly writes that boosting the full-benefit age would ‘effectively nullify Social Security for millions of Americans’ with comparatively low life expectancies. It wouldn’t. Anyone who wanted to claim benefits at age 62 still could. Their benefits would be reduced. But so would benefits of people who retire at older ages.

Equally mistaken is Stuart Butler’s comment that increasing the full-benefit age from 67 to 70 would ‘cut total lifetime retirement benefits proportionately more for those on the bottom rungs of the income ladder.’ It wouldn’t. The cut would be proportionately the same for everyone, regardless of past earnings or life expectancy.

Both Shapiro and Butler, along with many others including my other colleagues Barry Bosworth and Gary Burtless, have noted correctly that life expectancies of high earners have risen considerably, while those of low earners have risen little or not at all. As a result, the lifetime value of Social Security Old-Age Insurance benefits has grown more for high- than for low-earners. That development has been at least partly offset by trends in Social Security Disability Insurance, which goes disproportionately to those with comparatively low earnings and life expectancies and which has been growing far faster than Old-Age Insurance, the largest component of Social Security.

But even if the lifetime value of all Social Security benefits has risen faster for high earners than for low earners, an across-the-board cut in benefits does nothing to offset that trend. In the name of lowering overall Social Security spending, it would cut benefits for those whose life expectancies have not risen at all simply because the life expectancies of others have risen. Such ‘evenhandedness’ calls to mind Anatole France’s comment that French law ‘in its majestic equality, ...forbids rich and poor alike to sleep under bridges, beg in streets, or steal loaves of bread.’

Faulty analyses, such as those of Shapiro and Butler, cannot conceal a genuine challenge to policy makers. Social Security does face a projected, long-term funding shortfall. Trends in life expectancies may well have made the system less progressive overall than it was in the past. What should be done?

For starters, one needs to recognize that rising life expectancy does not lower the need for Social Security retirement benefits among those in successive cohorts who retire at any given age; it increases that need, because whatever personal savings they have accumulated must be stretched more thinly to cover more retirement years.
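A toy calculation, with purely hypothetical numbers, makes the point: a fixed nest egg spread over a longer retirement buys a smaller annual income.

    # Hypothetical figures only: a fixed $300,000 in savings drawn down evenly
    # over retirements of increasing length. Longer lives mean thinner annual draws.
    savings = 300_000
    for years in (20, 25, 30):
        print(f"{years} years of retirement: ${savings / years:,.0f} per year")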

For those who remain healthy, the best response to rising longevity may be to retire later. Later retirement means more time to save and fewer years to depend on savings. Here is where the wrong-headedness of Butler’s proposal, to phase down benefits for those with current incomes of $25,000 or more and to eliminate them for those with incomes over $100,000, becomes apparent. Apart from Social Security, the main sources of income for retirees are personal savings and, to an ever-diminishing degree, employer-financed pensions. Converting Social Security from a program whose benefits are based on past earnings to one whose benefits are based on current income from savings would impose a tax-like penalty on such savings, just as a direct tax on those savings would. Conservatives and liberals alike should understand that taxing something is not the way to encourage it.

Still, working longer by definition lowers retirement income needs. That is why some analysts have proposed raising the age at which retirement benefits may first be claimed from 62 to some later age. But this proposal, like across-the-board benefit cuts, falls alike on those who can work longer without undue hardship and on those who cannot: people in physically demanding jobs they can no longer perform, people whose abilities have diminished, and people with low life expectancies. This group includes not only blue-collar workers but also many white-collar employees, as a recent study by the Center for Retirement Research at Boston College indicates. If entitlement to Social Security retirement benefits is delayed, it is incumbent on policymakers to link that change to other ‘backstop’ policies that protect those for whom continued work poses a serious burden. It is also incumbent on private employers to design ways to make workplaces friendlier to an aging workforce.

The challenge of adjusting Social Security in the face of unevenly distributed increases in longevity, growing income inequality, and the prospective shortfall in Social Security financing is real. The issues are difficult. But solutions are unlikely to emerge from confusion about the way Social Security operates and the actual effects of proposed changes to the program. Nor will they be advanced by proposals that would bring to Social Security the failed Vietnam War strategy of destroying a village in order to save it.

Image Source: © Sam Mircovich / Reuters
      
 
 




bl

Not just a typographical change: Why Brookings is capitalizing Black

Brookings is adopting a long-overdue policy to properly recognize the identity of Black Americans and other people of ethnic and indigenous descent in our research and writings. This update comes just as the 1619 Project is re-educating Americans about the foundational role that Black laborers played in making American capitalism and prosperity possible. Without Black…

       




bl

Walk this Way: The Economic Promise of Walkable Places in Metropolitan Washington, D.C.

An economic analysis of a sample of neighborhoods in the Washington, D.C. metropolitan area using walkability measures finds that more walkable places perform better economically. For neighborhoods within metropolitan Washington, as the number of environmental features that facilitate walkability and attract pedestrians increases, so do office, residential, and retail rents, retail revenues, and for-sale…

       




bl

Catalytic development: (Re)creating walkable urban places

Since the mid-1990s, demographic and economic shifts have fundamentally changed markets and locations for real estate development. These changes are largely powered by growth of the knowledge economy, which, since the turn of the 21st century, has begun moving out of suburban office parks and into more walkable mixed-use places in an effort to attract…

       




bl

Catalytic development: (Re)making walkable urban places

Over the past several decades, demographic shifts and the rise of the knowledge economy have led to increasing demand for more walkable, mixed-use urban places. Catalytic development is a new model of investment that takes a large-scale, long-term approach to recreating such communities. The objectives of this model are exemplified in Amazon’s RFP for…

       




bl

How Fear of Cities Can Blind Us From Solutions to COVID-19