Looking Forward, Not Backward: Refining American Interrogation Law

The following is part of the Series on Counterterrorism and American Statutory Law, a joint project of the Brookings Institution, the Georgetown University Law Center, and the Hoover Institution. Introduction: The worldwide scandal spurred by the abuse of prisoners at Abu Ghraib, Guantánamo, Afghanistan, and secret CIA prisons during the Bush Administration has been a…

Credit Crisis: The Sky is not Falling

U.S. stock markets are gyrating on news of an apparent credit crunch generated by defaults among subprime home mortgage loans. Such frenzy has spurred Wall Street to cry capital crisis. However, there is no shortage of capital – only a shortage of confidence in some of the instruments Wall Street has invented. Much financial capital…

Are Americans sliding into another war?

The current U.S. administration has wrapped up U.S. involvement in a mistaken war in Iraq (albeit on a schedule set by the previous administration, and with subsequent reintroduction of some U.S. military personnel into Iraq), has wound down U.S. involvement in a war in Afghanistan that had metamorphosed from a counterterrorist operation into a nation-building…

The Rohingya people need help, but Aung San Suu Kyi is not to blame for their mistreatment

Can the US sue China for COVID-19 damages? Not really.

The GDP Report Is Not As Bad As It Looks


My first response to the GDP report was “holy cow!” It’s not often that the U.S. economy contracts, and the headline says that this just happened in the final quarter of 2012. Many had expected weak growth; none had seen a contraction coming. But once you take a deep breath, read past the headline, and delve into the numbers, you’ll see that this is actually a pretty good (though not great) report. The internals are much better than the top line suggests. Under the hood, we see solid growth in both consumption and investment; private spending was humming along. Last quarter’s decline in U.S. GDP was all about inventories (which subtracted 1.3 percentage points from growth) and sharp cuts in defense spending. Neither of these is expected to persist.

And let’s not forget that this is the "advance" GDP estimate, which is only an early (and often inaccurate) guess at what was happening. Typically, this estimate misses the mark by a full 1.3 percentage points.

I'm sure we will start seeing the use of the dreaded "R" word (recession). That's premature, and almost certainly wrong. The U.S. economy is growing, although probably slower than potential. Don’t let me overstate my sunny optimism though—the recovery is still precarious, and Congress could still blow it up.

Overall, there's nothing in today's GDP report to change my view: The U.S. economy was doing OK -- maybe even pretty well -- but definitely not great in the final quarter of 2012. While this morning's negative growth number is an attention grabber, realize it's for last quarter, it's an early guess, and it's contradicted by most other data which point to an economy that is still growing, although perhaps not fast enough.

And finally, a trivia question: When is the last time that the first big hint of bad economic news came from an advance GDP report? Answer: Never.

Image Source: © Rebecca Cook / Reuters

Putin’s not-so-excellent spring

Early this year, Vladimir Putin had big plans for an excellent spring: first, constitutional amendments approved by the legislative branch and public allowing him the opportunity to remain in power until 2036, followed by a huge patriotic celebration of the 75th anniversary of the defeat of Nazi Germany. Well, stuff happens—specifically, COVID-19. Putin’s spring has…

The Arab Spring is 2011, Not 1989

The Arab revolutions are beginning to destroy the cliché of an Arab world incapable of democratic transformation. But another caricature is replacing it: according to the new narrative, the crowds in Cairo, Benghazi or Damascus, mobilized by Facebook and Twitter, are the latest illustration of the spread of Western democratic ideals; and while the “rise…

Break up the big banks? Not quite, here’s a better option.


Neel Kashkari, the newly appointed president of the Federal Reserve Bank of Minneapolis, is super smart, with extensive experience in the financial industry at Goldman Sachs and then running the government’s TARP program. But his call to break up the big banks misses the mark.

Sure, big banks, medium-sized banks and small banks all contributed to the devastating financial crisis, but so did the rating agencies and the state-regulated institutions (mostly small) that originated many of the bad mortgages.  It was vital that regulation be strengthened to avoid a repetition of what happened – and it has been.  There should never again be a situation where policymakers are faced with either bailing out failing institutions or letting them fail and seeing financial panic spread.

That’s why the Dodd-Frank Act gave the authorities a new tool to avoid that dilemma titled “Orderly Liquidation Authority,” which gives them the ability to fail a firm but sustain the key parts whose failure might cause financial instability.  Kashkari thinks that the authorities will not want to exercise this option in a crisis because they will be fearful of the consequences of imposing heavy losses on the original owners of the largest banks.  It’s a legitimate concern, but he underestimates the progress that has been made in making the orderly liquidation authority workable in practice.  He also underestimates the determination of regulators not to bail out financial institutions from now on.

To make orderly liquidation operational, the Federal Deposit Insurance Corporation (FDIC) devised the “single point of entry” approach, or SPOE, which provides a way of dealing with large failing banks. The bank holding company is separated from the operating subsidiaries and takes with it all of the losses, which are then imposed on the shareholders and unsecured bondholders of the original holding company, not on the creditors of the critical operating subsidiaries or on taxpayers. The operating subsidiaries of the failing institution are placed into a new bank entity and kept open and operating, so that customers can still go to their bank branch or ATM and get their money, the bank can still make loans to support household and business spending, and the investment bank can continue to help businesses and households raise funds in securities markets. The largest banks also have foreign subsidiaries, and these too would stay open to serve customers in Brazil or Mexico.

This innovative approach to failing banks is not magic, although it is hard for most people to understand.  However, the reason that Kashkari and other knowledgeable officials have not embraced SPOE is that they believe the authorities will be hesitant to use it and will try to find ways around it.  When a new crisis hits, the argument goes, government regulators will always bail out the big banks.

First, let’s get the facts straight about the recent crisis. The government did step in to protect the customers of banks of all sizes as well as money market funds. In the process, it also protected most bondholders and others who had lent money to the troubled institutions, including the creditors of Bear Stearns, a broker-dealer, and AIG, an insurance company. This was done for good reason: a collapse in the banking and financial system more broadly would have been even worse if markets had stopped lending to them. Shareholders of banks and other systemically important institutions lost a lot of money in the crisis, as they should have. The CEOs lost their jobs, as they should have (although not their bonuses). Most bondholders were protected because it was an unfortunate necessity.

As a result of Dodd-Frank rules, the situation is different now from what it was in 2007.  Banks are required to hold much more capital, meaning that there is more shareholder equity in the banks.  In addition, banks must hold long-term unsecured debt, bonds that essentially become a form of equity in the event of a bank failure.  It is being made clear to markets that this form of lending to banks will be subject to losses in the event the bank fails—unlike in 2008.  Under the new rules, both the owners of the shares of big banks and the holders of their unsecured bonds have a lot to lose if the bank fails, providing market discipline and a buffer that makes it very unlikely indeed that taxpayers would be on the hook for losses.

The tricky part is to understand the situation facing the operating subsidiaries of the bank holding company — the parts that are placed into a new bank entity and remain open for business.  The subsidiaries may in fact be the part of the bank that caused it to fail in the first place, perhaps by making bad loans or speculating on bad risks.  Some of these subsidiaries may need to be broken off and allowed to fail along with the holding company—provided that can be done without risking spillover to the economy.  Other parts may be sold separately or wound down in an orderly way.  In fact the systemically important banks are required to submit “living wills” to the FDIC and the Federal Reserve that will enable the critical pieces of a failing bank to be separated from the rest.

It is possible that markets will be reluctant to lend money to the new entity but the key point is that this new entity will be solvent because the losses, wherever they originated, have been taken away and the new entities recapitalized by the creditors of the holding company that have been “bailed in.”   Even if it proves necessary for the government to lend money to the newly formed bank entity, this can be done with reasonable assurance that the loans will be repaid with interest.  Importantly, it can be done through the orderly liquidation authority and would not require Congress to pass another TARP, the very unpopular fund that was used to inject capital into failing institutions.

There are proposals to enhance the SPOE approach by creating a new chapter of the bankruptcy code, so that a judge would control the failure process for a big bank and ensure there is no government bailout.  I support these efforts to use bankruptcy proceedings where possible, although I am doubtful that the courts could handle a severe crisis with multiple failures of global financial institutions.  But regardless of whether failing financial institutions are resolved through judicial proceedings or through the intervention of the FDIC (as specified under Title II of Dodd-Frank), the new regulations guarantee that shareholders and unsecured bondholders bear the losses, so that the parts of the firm that are essential for keeping financial services going in the economy are kept alive.  That should assure the authorities that bankruptcy or resolution can be undertaken while keeping the economy relatively safe.

The Federal Reserve regulates the largest banks and it is making sure that the bigger the bank, the greater is the loss-absorbing buffer it must hold—and it will be making sure that systemically important nonbanks also have extra capital and can be resolved in an orderly manner.  Once that process is complete, it can be left to the market to decide whether or not it pays to be a big bank.  Regulators do not have to break up the banks or figure out how that would be done without disrupting the financial system.


Editor's note: This piece originally appeared in Bloomberg Government

Publication: Bloomberg Government
Image Source: © Keith Bedford / Reuters

The real reason your paycheck is not where it could be


For more than a decade, the economy’s rate of productivity growth has been dismal, which is bad news for workers, since their incomes rise slowly or not at all when this is the case. Economists have struggled to understand why American productivity has been so weak. After all, with all the information technology innovations that make our lives easier, like iPhones, Google, and Uber, why hasn’t our country been able to work more productively, giving us either more leisure time or more output at work and higher pay in return?

One answer often given is that the government statisticians must be measuring something wrong – notably, the benefits of Google and all the free stuff we can now access on our phones, tablets and computers. Perhaps government statisticians just couldn’t figure out how to include those new services in the data in a meaningful way?

A new research paper by Fed economists throws cold water on that idea. They think that free stuff like Facebook should not be counted in GDP, or in measures of productivity, because consumers do not pay for these services directly; the costs of providing them are paid for by advertisers. The authors point out that free services paid for by advertising are not new; for example, when television broadcasting was introduced it was provided free to households and much of it still is.

The Fed economists argue that free services like Google are a form of “consumer surplus,” defined as the value consumers place on the things they buy that is over and above the price they have paid. Consumer surplus has never been included in past measures of GDP or productivity, they point out. Economist Robert Gordon, who commented on the Fed paper at the conference where it was presented, argued that even if consumer surplus were to be counted, most of the free stuff such as search engines, e-commerce, airport check-in kiosks and the like was already available by 2004, and hence would not explain the productivity growth slowdown that occurred around that time.

The Fed economists also point out that the slowdown in productivity growth is a very big deal. If the rate of growth achieved from 1995 to 2004 had continued for another decade, GDP would have been $3 trillion higher, the authors calculate. And the United States is not alone in facing weak productivity; it is a problem for all developed economies. It is hard to believe that such a large problem faced by so many countries could be explained by errors in the way GDP and productivity are measured.
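The $3 trillion figure is straightforward compound-growth arithmetic. As a rough sketch, with illustrative numbers that are assumptions rather than the authors’ actual data (a hypothetical base GDP and ballpark growth rates), the counterfactual gap can be computed like this:

```python
# Back-of-the-envelope compound-growth counterfactual.
# All figures below are illustrative assumptions, not the Fed authors' data.
base_gdp = 15.0      # starting GDP in trillions of dollars (assumed)
fast_growth = 0.03   # assumed 1995-2004 average annual growth rate
slow_growth = 0.02   # assumed post-2004 average annual growth rate
years = 10

# Compound each growth rate forward over the decade and compare.
counterfactual = base_gdp * (1 + fast_growth) ** years
actual = base_gdp * (1 + slow_growth) ** years
gap = counterfactual - actual

print(f"Counterfactual GDP: ${counterfactual:.2f} trillion")
print(f"Actual GDP:         ${actual:.2f} trillion")
print(f"Gap:                ${gap:.2f} trillion")
```

Even a one-percentage-point shortfall compounded over a decade opens a gap of nearly $2 trillion on these assumed numbers; a somewhat larger shortfall on a larger base produces a figure of the $3 trillion magnitude the authors report.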

Even though I agree with the Fed authors that the growth slowdown is real, there are potentially serious measurement problems for the economy that predate the 2004 slowdown.

Health care is the most important example. It amounts to around 19% of GDP and in the official accounts there has been no productivity growth at all in this sector over many, many years. In part that may reflect inefficiencies in health care delivery, but no one can doubt that the quality of care has increased. New diagnostic and scanning technologies, new surgical procedures, and new drugs have transformed how patients are treated and yet none of these advances has been counted in measured productivity data. The pace of medical progress probably was just as fast in the past as it is now, so this measurement problem does not explain the slowdown. Nevertheless, trying to obtain better measures of health care productivity is an urgent task. The fault is not with the government’s statisticians, who do a tremendous job with very limited resources. The fault lies with those in Congress who undervalue good economic statistics.

Gordon, in his influential new book The Rise and Fall of American Growth, argues that the American engine of innovation has largely run its course. The big and important innovations are behind us and future productivity growth will be slow. My own view is that the digital revolution has not nearly reached an end, and advances in materials science and biotechnology promise important innovations to come. Productivity growth seems to go in waves and is impossible to forecast, so it is hard to say for sure if Gordon is wrong, but I think he is.

Fortune reported in June 2015 that 70% of its top 500 CEOs listed rapid technological change as their biggest challenge. I am confident that companies will figure out the technology challenge, and productivity growth will get back on track, hopefully sooner rather than later.


Editor’s note: This piece originally appeared in Fortune.

Publication: Fortune
Image Source: © Jessica Rinaldi / Reuters

Not just for the professionals? Understanding equity markets for retail and small business investors


Event Information

April 15, 2016
9:00 AM - 12:30 PM EDT

The Brookings Institution
Falk Auditorium
1775 Massachusetts Ave., N.W.
Washington, DC 20036

The financial crisis is now eight years behind us, but its legacy lingers on. Many Americans are concerned about their financial security and are particularly worried about whether they will have enough for retirement. Guaranteed benefit pensions are gradually disappearing, leaving households to save and invest for themselves. What role could equities play for retail investors?

Another concern about the lingering impact of the crisis is that business investment and overall economic growth remain weak compared to expectations. Large companies are able to borrow at low interest rates, yet many of them have large cash holdings. However, many small and medium-sized enterprises face difficulty funding their growth, paying high risk premiums on their borrowing and, in some cases, being unable to fund investments they would like to make. Equity funding can be an important source of growth financing.

On Friday, April 15, the Initiative on Business and Public Policy at Brookings examined what role equity markets can play in individual retirement security and small business investment, and whether they can help jumpstart America’s innovation culture by fostering the transition from startups to billion-dollar companies.


Donald Trump is spreading racism — not fighting terrorism

Not his father’s Saudi Arabia

Class Notes: Unequal Internet Access, Employment at Older Ages, and More

This week in Class Notes: The digital divide—the correlation between income and home internet access—explains much of the inequality we observe in people's ability to self-isolate. The labor force participation rate among older Americans and the age at which they claim Social Security retirement benefits have risen in recent years. Higher minimum wages lead to a greater prevalence…

Class Notes: Harvard Discrimination, California’s Shelter-in-Place Order, and More

This week in Class Notes: California's shelter-in-place order was effective at mitigating the spread of COVID-19. Asian Americans experience significant discrimination in the Harvard admissions process. The U.S. tax system is biased against labor in favor of capital, which has resulted in inefficiently high levels of automation. Our top chart shows that poor workers are much more likely to keep commuting in…

Argentina must not waste its crisis

If you leave Argentina and come back 20 days later, according to a tragically apt joke, you’ll find everything is different, but if you come back after 20 years, you’ll find that everything is the same. Will the country’s likely next president, Alberto Fernández, finally manage to erase that punch line? According to the World Bank, since…

Targeted Improvements in Crisis Resolution, Not a New Bretton Woods

The current crisis reveals two major flaws in the world’s crisis-resolution mechanisms: (i) funds available to launch credible rescue operations are insufficient, and (ii) national crisis responses have negative spillovers. One solution is to emulate the EU’s enhanced cooperation solution at the global level, with the IMF ensuring that the rules are respected. Big global…

Cyprus as Another Euro-Solution


After 10 hectic days, Cypriots will return to economic life. The price, however, is an inevitable and costly adjustment plan. But contrary to many predictions, the eurozone and the Cypriot government have been able to find a solution in less than 10 days. Moreover, the eurozone has avoided yet another financial hurdle that, despite its small size, was described as having the potential to start another acute phase of the euro crisis.

The management of the eurozone crisis over the last three years has proven to be extremely tortuous. It remains so, and this episode will certainly not be the last. However, observers might also point to how the management by congressional leaders of the U.S. fiscal and deficit problems reveals similar political complexities. Could both be the inevitable result of a democratic, diverse, continental political constituency?

What people need to understand about the eurozone is its continuous willingness to ensure the future of the euro, and its (until now) proven capacity to find compromises despite diverging national interests.

Cyprus has been recognized for months as a ticking bomb within the eurozone, mixing a hypertrophied banking system (that produced jobs and wealth for Cypriots) with huge Russian deposits and suspected money laundering. It seems that this had become Cyprus’s most important comparative advantage. The fight against money laundering is supposed to be a great cause of the OECD countries, and it is surprising to note that this aspect did not receive appropriate weight when commenting on the unconventional tools used by the troika to design its plan. The Cypriot banking system is not like the average banking system of Southern Europe. It is a case in itself and deserves a solution of its own.

The “success story” of Cyprus was destroyed by the haircut on Greek bonds; Cypriot banks hold massive amounts of Greek bonds on behalf of their foreign clients. Incidentally, this says a lot about the prowess of this supposed “international financial center” and the awareness of its clients. For many reasons, mostly the country’s democratic process, the active search for a solution to problems in Cyprus had been postponed for months until Saturday, March 16, when an agreement was reached between the newly-elected president of Cyprus, the eurozone governments, and the troika. On that date, every old prejudice about the mismanagement of the eurozone crisis, that had been shelved for the last year, suddenly resurfaced with a new torrent: of criticisms (an ill-conceived plan); of denunciations (a crisis of stupidity); of rejection (Europe is for people, not for Germany); of financial horrors (inevitable propagation of the Cypriot bank run); and finally of doomed forecasts (be alert, the breakup is coming).

Yet one week later, it is interesting to visit the control room and watch the radar screens:

  • The agreement? Better designed and operational as of Monday, March 25;
  • Propagation of bank runs? No sign (even in the London branches of the two Cypriot banks);
  • The European periphery bond market? A definitely strong first quarter;
  • Stock markets? Stable;
  • Exchange markets? Stable.

However, we should not consider this summary to mean that this new episode in the eurozone saga has been more efficiently managed than the previous ones. Definitely not!

Two examples among many explain why this is not the case. First, the idea to tax every bank account, whatever its amount, was not a product of “German stupidity” but reflected a demand from the Cypriot president, who was willing to preserve the image of the island as a financial center; as if the confidence of dirty money could be a sustainable comparative advantage for Cyprus! The stupefying thing is that the other euro governments accepted this clause even though it was financially dangerous and certain to be rejected by the populace and its representatives. Second, following the relief produced by the substance of the new agreement, the Dutch finance minister and chairman of the Eurogroup announced that the Cypriot treatment was great news because it showed that bank depositors may be expected to contribute to future bailout packages. However, this is explosive and potentially as damaging as the PSI initiative adopted at Deauville. There was immediate backtracking, but this reminds us that the whole process remains fragile. All this being properly considered, we should examine the ongoing euro crisis along a different narrative.

What the above-mentioned facts demonstrate is that markets and people outside of Cyprus adopted (at least until the Dutch minister’s proclamation) a much calmer view than specialized commentators. And after having described the situation in Cyprus as potential chaos in waiting, experts now explain the absence of collateral effects by referring to Mario Draghi’s famous July 2012 commitment. This is at best an excuse for not exploring other explanations and at worst a superstition that places too much power in his words. Rather, two broader facts should be emphasized:

  • First, looking outside the eurozone, the euro has remained as attractive an international currency as it was before all the vicissitudes of the sovereign debt crisis, despite all the aggressiveness on the part of the international financial press. The exchange rate with the dollar has constantly remained close to 1.3, a rate which reveals an over-valuation of the euro; such stability is surprising given all the daily announcements of its forthcoming collapse. This fact, which has never received proper attention, at the very least proves that the euro has always remained as attractive as the dollar. After all the drama we have gone through, there was little chance that the Cypriot episode would change this global perception of the euro.

  • Second, within the eurozone, there is an underestimated willingness to stick to the euro as the currency of the European continent. Austerity measures are never popular and governments that adopt them have been punished in Greece, Spain, France and Italy. Nevertheless, this is the natural product of democracy, and when it comes to the explicit question— “do you prefer to stay in the eurozone, with its mechanisms and constraints, or move on your own?”— the popular answer everywhere has been “we stay”. This is what popular votes have proven in Ireland, Greece and Spain, as well as in Germany where local elections have regularly promoted euro-friendly candidates.

So what can we conclude from the recent crisis in Cyprus? The first conclusion is that Cyprus will pay a high price for exiting a dramatic situation and securing access to eurozone support; no other feasible deal was better than that one at that particular moment. Second, we have witnessed once again the willingness of the eurozone to stay the course, and its ability to design imperfect but feasible compromises, which is not so bad when compared to what’s going on in Washington. In brief, this is another Euro-solution.

However, Cyprus is certainly not the last challenge confronting the governments and people of the eurozone. In that sense, the most problematic lesson from this chaotic week is not financial but political. The future of Europe lies more and more in the hands of Germany, and there is no place here for accusing the Germans of egoism. Financially speaking, they have moved forward at every step during the last three years, and they are the ones that repeatedly take the biggest risks. There is no question that Germany has a prominent voice and that it defends its financial security before entering into an agreement. This is what should have been expected, and this is what we have seen in Cyprus. Looking forward, the bigger problem facing the eurozone is the urgent need to design a macroeconomic policy that will spur a return to growth for the region. On this issue, there is still no visible Euro-solution, and that could prove to be the biggest risk facing Europe.


Why the Turkish election results are not all bad news (just mostly)


This weekend’s election results in Turkey were a surprise to the vast majority of Turkish pollsters and pundits, myself included. The ruling Justice and Development Party (AKP) won nearly 50 percent of the popular vote. The party can now form a single-party government, even if it doesn’t have the supermajority necessary to remake the Turkish constitution. What happened?

Now I see clearly

As with much in life, the result does make sense in hindsight. Prior to the June 7 election, President Recep Tayyip Erdoğan and the AKP leadership had supported a Kurdish peace process, in part in the hope of gaining Kurdish votes. In that election, however, not only did the AKP fail to win new Kurdish votes, but support for the Nationalist Action Party (MHP)—a far-right Turkish nationalist party—swelled, apparently out of frustration among nationalist Turks with the AKP-led peace process with the Kurds. In other words, the AKP had the worst of both worlds.

Erdoğan and the AKP leadership, recognizing the political problem this posed for them, allowed the peace process to collapse amid mounting instability driven by the Syrian civil war. This, combined with disillusionment with the MHP leadership due to their perceived unwillingness to form a coalition government, drove about two million MHP voters to the AKP this weekend. The exodus shows, in a sense, what close substitutes the two parties can be among a more nationalist voting bloc.

The controlled chaos that resulted from the collapse of the peace process—combined with the escalating refugee crisis, the fear of ISIS attacks, and the struggling economy—helped the government politically. Voters evidently recalled that it had been the AKP that brought the country out of the very tough times of the 1990s.  

In contrast, the opposition parties seem to lack leadership and appear to promise only internal squabbles and indecisiveness. Craving security and stability, voters have now turned to the one party that appears to have the strength to provide it. In that sense, Erdoğan’s nationalist gambit—which was actually a well-conceived series of political maneuvers—worked. Even some one million conservative Kurdish voters returned to the AKP.

These voters perhaps did not notice the irony that the government had also engineered the instability they feared. In part, this success derives from the government’s control over the media. These elections may have been free, in the sense that Turkish voters could cast a ballot for the candidates they wanted. But they were not fair. The state maintained tight control over traditional and social media alike. Freedom House and the Committee to Protect Journalists, among others, have cast doubt on Turkey’s press freedom credentials. Real opposition voices are difficult for the media to publish or for voters to see on television. Thus, for example, Selahattin Demirtaş, the leader of the pro-Kurdish Peoples' Democratic Party (HDP) and the most charismatic opposition politician in Turkey, had essentially no air time during the campaign.

Not all bad news

There are some important upsides to the election results. For one, the HDP again passed the 10 percent threshold to remain in parliament. That will help mitigate—though hardly erase—the polarization that grips the country, and will hopefully make the government reconsider its abandonment of the Kurdish peace process.

More significantly, the AKP does not have what it needs to convert Turkey’s government structure into a presidential system, which would be a bad move for the country. The election results will undoubtedly revitalize Erdoğan’s push for a presidential regime in Turkey. But that requires changing the constitution, and the AKP did not achieve the supermajority that it would need to do that on its own.

Critically, changing to a presidential system will require some support from the opposition and even more importantly popular support via a referendum. As political strategists around the world have learned, people tend not to vote on the actual referendum item, per se, but based on more general opinions of their leadership. So to win a referendum on the presidential system, Erdoğan and his AKP colleagues would need to show improvements in the economy, in the security situation, on the Kurdish issue, on Syrian refugees, and on national stability more generally. Instability in Turkey, particularly the renewal of violence in the Kurdish region, will deter investment and deepen the economic slump throughout the country.

With its new majority, AKP leaders are now in a position of strength to negotiate with the HDP over Kurdish issues. The refugee crisis also gives the government more leverage with the EU. If it chooses to use its strength to reach positive agreements on those fronts, the outcomes could be very good for the Turkish people.

To actually win a referendum on the presidential system, Erdoğan would have to work to depolarize his country. While the presidential system itself would not be good for Turkey, the process of getting there might be.

      
 
 





Turkey cannot effectively fight ISIS unless it makes peace with the Kurds


Terrorist attacks with high casualties usually create a sense of national solidarity and patriotic reaction in societies that fall victim to such heinous acts. Not in Turkey, however. Despite a growing number of terrorist attacks by the so-called Islamic State on Turkish soil in the last 12 months, the country remains as polarized as ever under strongman President Recep Tayyip Erdogan.

In fact, for two reasons, jihadist terrorism is exacerbating the division. First, Turkey's domestic polarization already has an Islamist-versus-secularist dimension. Most secularists hold Erdogan responsible for having created domestic political conditions that turn a blind eye to jihadist activities within Turkey.

It must also be said that polarization between secularists and Islamists in Turkey often fails to capture the complexity of Turkish politics, where not all secularists are democrats and not all Islamists are autocrats. In fact, there was a time when Erdogan was hailed as the great democratic reformer against the old secularist establishment under the guardianship of the military.

Yet, in the last five years, the religiosity and conservatism of the ruling Justice and Development Party, also known by its Turkish acronym AKP, on issues ranging from gender equality to public education has fueled the perception of rapid Islamization. Erdogan's anti-Western foreign policy discourse -- and the fact that Ankara has been strongly supportive of the Muslim Brotherhood in the wake of the Arab Spring -- exacerbates the secular-versus-Islamist divide in Turkish society.

The days Erdogan represented the great hope of a Turkish model where Islam, secularism, democracy and pro-Western orientation came together are long gone. Despite all this, it is sociologically more accurate to analyze the polarization in Turkey as one between democracy and autocracy rather than one of Islam versus secularism.

The second reason why ISIS terrorism is exacerbating Turkey's polarization is related to foreign policy. A significant segment of Turkish society believes Erdogan's Syria policy has ended up strengthening ISIS. In an attempt to facilitate Syrian President Bashar Assad's overthrow, the AKP turned a blind eye to the flow of foreign volunteers transiting Turkey to join extremist groups in Syria. Until last year, Ankara often allowed Islamists to openly organize and procure equipment and supplies on the Turkish side of the Syria border.

Making things worse is the widely held belief that Turkey's National Intelligence Organization, or MİT, facilitated the supply of weapons to extremist Islamist elements amongst the Syrian rebels. Most of the links were with organizations such as Jabhat al-Nusra, Ahrar al-Sham and Islamist extremists from Syria's Turkish-speaking Turkmen minority.

Turkey's support for Islamist groups in Syria had another rationale in addition to facilitating the downfall of the Assad regime: the emerging Kurdish threat in the north of the country. Syria's Kurds are closely linked with Turkey's Kurdish nemesis, the Kurdistan Workers' Party, or PKK, which has been conducting an insurgency for greater rights for Turkey's Kurds since 1984.

On the one hand, Ankara has hardened its stance against ISIS by opening the airbase at Incirlik in southern Turkey for use by the U.S-led coalition targeting the organization with air strikes. However, Erdogan doesn't fully support the eradication of jihadist groups in Syria. The reason is simple: the Arab and Turkmen Islamist groups are the main bulwark against the expansion of the de facto autonomous Kurdish enclave in northern Syria. The AKP is concerned that the expansion and consolidation of a Kurdish state in Syria would both strengthen the PKK and further fuel similar aspirations amongst Turkey's own Kurds.

Will the most recent ISIS terrorist attack in Istanbul change anything in Turkey's main threat perception? When will the Turkish government finally realize that the jihadist threat in the country needs to be prioritized? If you listen to Erdogan's remarks, you will quickly realize that the real enemy he wants to fight is still the PKK. He tries hard after each ISIS attack to create a "generic" threat of terrorism in which all groups are bundled up together without any clear references to ISIS. He is trying to present the PKK as enemy number one.

Under such circumstances, Turkish society will remain deeply polarized between Islamists, secularists, Turkish nationalists and Kurdish rebels. Terrorist attacks, such as the one in Istanbul this week and the one in Ankara in July that killed more than 100 people, will only exacerbate these divisions.

Finally, it is important to note that the Turkish obsession with the Kurdish threat has also created a major impasse in Turkish-American relations in Syria. Unlike Ankara, Washington's top priority in Syria is to defeat ISIS. The fact that U.S. strategy consists of using proxy forces such as Syrian Kurds against ISIS further complicates the situation.

There will be no real progress in Turkey's fight against ISIS unless there is a much more serious strategy to get Ankara to focus on peace with the PKK. Only after a peace process with Kurds will Turkey be able to understand that ISIS is an existential threat to national security.

This piece was originally posted by The Huffington Post.

Publication: The Huffington Post
Image Source: © Murad Sezer / Reuters
      
 
 





Why Italy cannot exit the euro

The rise of strong euroskeptic parties in Italy in recent years had raised serious concerns about whether the country will permanently remain in the euro area. Although anti-euro rhetoric is now more muted, the fear of an “Italexit” still lingers in the economy. Italy’s notoriously high public debt is generally considered sustainable and not at…

       





Detroit Needs a Selloff, Not a Bailout

Robert Crandall and Clifford Winston discuss a proposal for automakers they think will cost taxpayers less and, in the long run, be more beneficial to labor and the overall economy than either a straight bailout or bankruptcy.

      
 
 





Despite Predictions, BCRA Has Not Been a Democratic 'Suicide Bill'

During debates in Congress and in the legal battles testing its constitutionality, critics of the Bipartisan Campaign Reform Act of 2002 imagined a host of unanticipated and debilitating consequences. The law's ban on party soft money and the regulation of electioneering advertising would, they warned, produce a parade of horribles: A decline in political speech protected by the First Amendment, the demise of political parties, and the dominance of interest groups in federal election campaigns.

The forecast that attracted the most believers — among politicians, journalists, political consultants, election-law attorneys and scholars — was the claim that Democrats would be unable to compete against Republicans under the new rules, primarily because the Democrats' relative ability to raise funds would be severely crippled. One year ago, Seth Gitell in The Atlantic Monthly summarized this view and went so far as to call the new law "The Democratic Party Suicide Bill." Gitell quoted a leading Democratic Party attorney, who expressed his private view of the law as "a fascist monstrosity." He continued, "It is grossly offensive ... and on a fundamental level it's horrible public policy, because it emasculates the parties to the benefit of narrow-focus special-interest groups. And it's a disaster for the Democrats. Other than that, it's great."

The core argument was straightforward. Democratic Party committees were more dependent on soft money — unlimited contributions from corporations, unions and individuals — than were the Republicans. While they managed to match Republicans in soft-money contributions, they trailed badly in federally limited hard-money contributions. Hence, the abolition of soft money would put the Democrats at a severe disadvantage in presidential and Congressional elections.

In addition, the argument went, by increasing the amount an individual could give to a candidate from $1,000 to $2,000, the law would provide a big financial boost to President Bush, who would double the $100 million he raised in 2000 and vastly outspend his Democratic challenger. Finally, the ban on soft money would weaken the Democratic Party's get-out-the-vote efforts, particularly in minority communities, while the regulation of "issue ads" would remove a potent electoral weapon from the arsenal of labor unions, the party's most critical supporter.

After 18 months of experience under the law, the fundraising patterns in this year's election suggest that these concerns were greatly exaggerated. Money is flowing freely in the campaign, and many voices are being heard. The political parties have adapted well to an all-hard-money world and have suffered no decline in total revenues. And interest groups are playing a secondary role to that of the candidates and parties.

The financial position of the Democratic party is strikingly improved from what was imagined a year ago. Sen. John Kerry (D-Mass.), who opted out of public funding before the Iowa caucuses, will raise more than $200 million before he accepts his party's nomination in Boston. The unusual unity and energy in Democrats' ranks have fueled an extraordinary flood of small donations to the Kerry campaign, mainly over the Internet. These have been complemented by a series of successful events courting $1,000 and $2,000 donors.

Indeed, since Kerry emerged as the prospective nominee in March, he has raised more than twice as much as Bush and has matched the Bush campaign's unprecedented media buys in battleground states, while also profiting from tens of millions of dollars in broadcast ads run by independent groups that are operating largely outside the strictures of federal election law.

The Democratic national party committees have adjusted to the ban on soft money much more successfully than insiders had thought possible. Instead of relying on large soft-money gifts for half of their funding, Democrats have shown a renewed commitment to small donors and have relied on grassroots supporters to fill their campaign coffers. After the 2000 election, the Democratic National Committee had 400,000 direct-mail donors; today the committee has more than 1.5 million, and hundreds of thousands more who contribute over the Internet.

By the end of June, the three Democratic committees had already raised $230 million in hard money alone, compared to $227 million in hard and soft money combined at this point in the 2000 election cycle. They have demonstrated their ability to replace the soft money they received in previous elections with new contributions from individual donors.

Democrats are also showing financial momentum as the election nears, and thus have been gradually reducing the Republican financial advantage in both receipts and cash on hand. In 2003, Democrats trailed Republicans by a large margin, raising only $95 million, compared to $206 million for the GOP. But in the first quarter of this year, Democrats began to close the gap, raising $50 million, compared to $82 million for Republicans. In the most recent quarter, they narrowed the gap even further, raising $85 million, compared to the Republicans' $96 million.

Democrats are now certain to have ample funds for the fall campaigns. Although they had less than $20 million in the bank (minus debts) at the beginning of this year, they have now banked $92 million. In the past three months, Democrats actually beat Republicans in generating cash — $47 million, compared to $31 million for the GOP.

The party, therefore, has the means to finance a strong coordinated and/or independent-spending campaign on behalf of the presidential ticket, while Congressional committees have the resources they need to play in every competitive Senate and House race, thanks in part to the fundraising support they have received from Members of Congress.

Moreover, FEC reports through June confirm that Democratic candidates in those competitive Senate and House races are more than holding their own in fundraising. They will be aided by a number of Democratic-leaning groups that have committed substantial resources to identify and turn out Democratic voters on Election Day.

Democrats are highly motivated to defeat Bush and regain control of one or both houses of Congress. BCRA has not frustrated these efforts. Democrats are financially competitive with Republicans, which means the outcome will not be determined by a disparity of resources. Put simply, the doomsday scenario conjured up by critics of the new campaign finance law has not come to pass.

Publication: Roll Call
     
 
 





Why AI systems should disclose that they’re not human

       





Innovation Is Not an Unqualified Good


Innovation is the driver of long-term economic growth and a key ingredient for improvements in healthcare, safety, and security, not to mention those little comforts and conveniences to which we have grown so accustomed. But innovation is not an unqualified good; it taxes society with costs.

The market system internalizes only a portion of the total costs of innovation. Other costs, however, are not included in market prices. Among the most important sources for those unaccounted costs are creative destruction, externalities, and weak safeguards for unwanted consequences.

Creative Destruction and Innovation

Schumpeter described creative destruction as the process by which innovative entrepreneurs outcompete older firms that, unable to adapt to a new productive platform, go out of business, laying off their employees and writing off their productive assets. Innovation, thus, also produces job loss and wealth destruction. Externalities are side effects whose costs are not priced in the marketplace, such as environmental degradation and pollution. While externalities are largely invisible in the accounting books, they levy very real costs on society in terms of human health and increased vulnerability to environmental shocks. In addition, new technologies are bound to have unwanted deleterious effects, some of which are harmful to workers and consumers, and often even to third parties not participating in those markets. Yet there are few financial or cultural incentives for innovators to design new technologies with safeguards against those effects.

Indeed, innovation imposes unaccounted costs, and those costs are not allocated in proportion to the benefits. Nothing in the market system obligates the winners of creative destruction to compensate the unemployed of phased-out industries, mandates producers to compensate those shouldering the costs of externalities, or creates incentives to invest in preventing unwanted effects in new production processes and new products. It is the role of policy to create the appropriate incentives for a fair distribution of those social costs. As a matter of national policy we must continue every effort to foster innovation, but we must do so recognizing the trade-offs.

Strengthening the Social Safety Net

Society as a whole benefits from creative destruction; society as a whole must then strengthen the safety net for the unemployed and redouble efforts to help workers retrain and find employment in emerging industries. Regulators and industry will always disagree on many things, but they could agree to collaborate on a system of regulatory incentives to ease the transition to productive platforms with low externality costs. Fostering innovation should also mean promoting a culture of anticipation to better manage unwanted consequences.

Let’s invest in innovation with optimism, but let’s be pragmatic about it. To reap the most net social benefit from innovation, we must work on two fronts, to maximize benefits and to minimize the social costs, particularly those costs not traditionally accounted. The challenge for policymakers is to do it fairly and smartly, creating a correspondence of benefits and costs, and not unnecessarily encumbering innovative activity.

Commentary published in The International Economy magazine, Spring 2014 issue, as part of a symposium of experts responding to the question: Does Innovation Lead to prosperity for all?

Image Source: © Suzanne Plunkett / Reuters
     
 
 





Why should I buy a new phone? Notes on the governance of innovation


A review essay of “Governance of Socio-technical Systems: Explaining Change”, edited by Susana Borrás and Jakob Edler (Edward Elgar, 2014, 207 pages).

Phasing-out a useful and profitable technology

I own a Nokia 2330; it’s a small brick phone that fits comfortably in the palm of my hand. People have feelings about this: mostly, they marvel at my ability to survive without a smart-phone. Concerns go beyond my wellbeing; once a friend protested that I should be aware of the costs I impose onto my friends, for instance, by asking them for precise directions to their houses. Another suggested that I cease trying to be smarter than my phone. But my reason is simple: I don’t need a smart phone. Most of the time, I don’t even need a mobile phone. I can take and place calls from my home or my office. And who really needs a phone during their commute? Still, my device will meet an untimely end. My service provider has informed me via text message that it will phase out all 2G service and explicitly encouraged me to acquire a 3G or newer model. 

There is a correct if simplistic explanation for this announcement: my provider is not making enough money with my account and should I switch to a newer device, they will be able to sell me a data plan. The more accurate and more complex explanation is that my mobile device is part of a communications system that is integrated to other economic and social systems. As those other systems evolve, my device is becoming incompatible with them; my carrier has determined that I should be integrated.

The system integration is easy to understand from a business perspective. My carrier may very well be able to make a profit keeping my account as is, along with the accounts of the legion of elderly and low-income customers who use similar devices, and still not find it advantageous in the long run to allow 2G devices on its network. To understand this business strategy, we need to go back no farther than the introduction of the iPhone, which, in addition to being the most marketable mobile phone, set a new standard platform for mobile devices. Its introduction accelerated a trend underway in the core business of carriers: the shift from voice communication to data streaming, because smart phones can support layers of overlapping services that depend on fast and reliable data transfer. These services include sophisticated log capabilities, web search, geo-location, connectivity to other devices, and, more recently, bio-monitoring. All those services are part of systems of their own, so it makes perfect business sense for carriers to seamlessly integrate mobile communications with all those other systems. Still, the economic rationale explains only a fraction of the systems integration underway.

The communication system of mobile telephony is also integrated with regulatory, social, and cultural systems. Consider the most mundane examples: it's hard to imagine anyone who, having shifted from a paper-and-pencil to an electronic agenda, decided to switch back afterwards. We are increasingly dependent on GPS services; while they may have once served tourists who did not wish to learn how to navigate a new city, they are now a necessity for many people who would be lost in their home town without them. Not needing to remember phone numbers, the time of our next appointment, or how to get back to that restaurant we really liked is a clear example of the integration of mobile devices into our value systems.

There are coordination efforts and mutual accommodation taking place: tech designers seek to adapt to changing values and we update our values to the new conveniences of slick gadgets. Government officials are engaged in the same mutual accommodation. They are asking how many phone booths must be left in public places, how to reach more people with public service announcements, and how to provide transit information in real-time when commuters need it. At the same time, tech designers are considering all existing regulations so their devices are compliant. Communication and regulatory systems are constantly being re-integrated.

The will behind systems integration

The integration of technical and social systems that results from innovation demands an enormous amount of planning, effort, and conflict resolution. The people involved in this process come from all quarters of the innovation ecology, including inventors, entrepreneurs, financiers, and government officials. Each of these agents may not be able to contemplate the totality of the system integration problem but they more or less understand how their respective system must evolve so as to be compatible with interrelated systems that are themselves evolving.  There is a visible willfulness in the integration task that scholars of innovation call the governance of socio-technical systems.

Introducing the term governance, I should emphasize that I do not mean merely the actions of governments or the actions of entrepreneurs. Rather, I mean the effort of all agents involved in the integration and re-integration of systems triggered by innovation; I mean all the coordination and mutual accommodation of agents from interrelated systems. And there is no single vehicle to transport all the relevant information for these agents. A classic representation of markets suggests that prices carry all the relevant information agents need to make optimal decisions. But it is impossible to project this model onto innovation because, as I suggested above, it does not adhere exclusively to economic logic; cultural and political values are also at stake. The governance task is therefore fragmented into pieces and assigned to each of the participants of the socio-technical systems involved, and they cannot resolve it as a profit-maximization problem. 

Instead, the participants must approach governance as a problem of design where the goal could be characterized as reflexive adaptation. By adaptation I mean seeking to achieve inter-system compatibility. By reflexive I mean that each actor must realize that their actions trigger adaption measures in other systems. Thus, they cannot passively adapt but rather they must anticipate the sequence of accommodations in the interaction with other agents. This is one of the most important aspects of the governance problem, because all too often neither technical nor economic criteria will suffice; quite regularly coordination must be negotiated, which is to say, innovation entails politics.

The idea of governance of socio-technical systems is daunting. How do we even begin to understand it? What kinds of modes of governance exist? What are the key dimensions to understand the integration of socio-technical systems? And perhaps more pressing, who prevails in disputes about coordination and accommodation? Fortunately, Susana Borrás, from the Copenhagen Business School, and Jakob Edler, from the University of Manchester, both distinguished professors of innovation, have collected a set of case studies that shed light on these problems in an edited volume entitled Governance of Socio-technical Change: Explaining Change. What is more, they offer a very useful conceptual framework of governance that is worth reviewing here. While this volume will be of great interest to scholars of innovation—and it is written in scholarly language—I think it has great value for policymakers, entrepreneurs, and all agents involved in a practical manner in the work of innovation.

Organizing our thinking on the governance of change

The first question that Borrás and Edler tackle is how to characterize the different modes of governance. They start out with a heuristic typology across the two central categories: what kinds of agents drive innovation and how the actions of these agents are coordinated. Agents can represent the state or civil society, and actions can be coordinated via dominant or non-dominant hierarchies.

Coordination by dominant hierarchies:

- Change led by state actors: traditional deference to technocratic competence; command and control.
- Change led by societal actors: monopolistic or oligopolistic industrial organization.

Coordination by non-dominant hierarchies:

- Change led by state actors: state agents as primus inter pares.
- Change led by societal actors: more competitive industries with little government oversight.

Source: Adapted from Borrás and Edler (2015), Table 1.2, p. 13.

This typology is very useful for understanding why different innovative industries have different dynamics: they are governed differently. For instance, we can readily understand why consumer software and pharmaceuticals are so at odds regarding patent law. The strict (and very necessary) regulation of drug production and commercialization, coupled with the oligopolistic structure of that industry, creates the need and opportunity to advocate for patent protection, which is equivalent to a government subsidy. In turn, the highly competitive environment of consumer software development and its low level of regulation foster an environment where patents hinder innovation. Government intervention is neither needed nor wanted; the industry wishes to regulate itself.

This typology is also useful for understanding why open source applications have gained currency much faster in the consumer segment than in the contractor segment of software producers. Examples of the latter include industry-specific software (e.g., to operate machinery, the stock exchange, or ATMs) and software to support national security agencies. These contractors demand proprietary software and depend on the secrecy of the source code. The software industry is not monolithic, and while highly innovative in all its segments, the innovation taking place varies greatly by its mode of governance.

Furthermore, we can understand the inherent conflicts in the governance of science. In principle, scientists are led by curiosity and organize their work in a decentralized and organic fashion. In practice, most of science is driven by mission-oriented governmental agencies and is organized in a rigid hierarchical system. Consider the centrality of prestige in science and how it is awarded by peer review, a system controlled by the top brass of each discipline. There is a nearly irreconcilable contrast between the self-image of science and its actual governance. Using the Borrás-Edler typology, we could say that scientists imagine themselves as citizens of the south-east quadrant while they really inhabit the north-west quadrant.

There are practical lessons from the application of this typology to current controversies. For instance, no policy instrument such as patents can have the same effect on all innovation sectors because the effect will depend on the mode of governance of the sector. This corollary may sound intuitive, yet it really is at variance with the current terms of the debate on patent protection, where assertions of its effect on innovation, in either direction, are rarely qualified.

The second question Borrás and Edler address is that of the key analytical dimensions for examining socio-technical change. To this end, they draw from an ample selection of social theories of change. First, economists and sociologists fruitfully debate the advantage of social inquiry focused on agency versus institutions. Here, the synthesis offered is reminiscent of Herbert Simon's "bounded rationality," where the focus turns to agents' decisions constrained by institutions. Second, policy scholars as well as sociologists emphasize the engineering of change. Change can be accomplished with discrete instruments such as laws and regulations, or diffuse instruments such as deliberation, political participation, and techniques of conflict resolution. Third, political scientists underscore the centrality of power in adjudicating the disputes produced by systems' change and integration. Borrás and Edler have condensed these perspectives into an analytical framework that boils down to three clean questions: Who drives change? (focus on agents bounded by institutions). How is change engineered? (focus on instrumentation). Why is it accepted by society? (focus on legitimacy). The case studies contained in this edited volume illustrate the deployment of this framework with empirical research.

Standards, sustainability, incremental innovation

Arthur Daemmrich (Chapter 3) tells the story of how the German chemical company BASF succeeded in marketing the biodegradable polymer Ecoflex. It is worth noting the dependence of BASF on government funding to develop Ecoflex, and on the German Institute for Standardization (DIN), which makes markets by setting standards. With this technology, BASF capitalized on the growing demand in Germany for biodegradables, and through its intense cooperation with DIN helped establish a standard that differentiated Ecoflex from the competition. By focusing on the enterprise (the innovation agent) and its role in engineering the market for its product by setting standards that would favor it, this story reveals the process of legitimation of this new technology. In effect, the certification of DIN was accepted by agribusinesses that sought to utilize biodegradable products.

If BASF is an example of innovation by standards, Allison Loconto and Marc Barbier (Chapter 4) show the strategies of governing by standards. They take the case of the International Social and Environmental Accreditation and Labelling alliance (ISEAL). ISEAL, an advocate of sustainability, positions itself as a coordinating broker among standard-developing organizations by offering "credibility tools" such as codes of conduct, best practices, impact assessment methods, and assurance codes. The organization advocates what is known as the tripartite system regime (TSR) around standards. TSR is a system of checks and balances to increase the credibility of producers complying with standards. The TSR regime assigns standard-setting, certification, and accreditation of the certifiers to separate and independent bodies. The case illustrates how producers, their associations, and broker organizations work to bestow upon standards their most valuable attribute: credibility. The authors are cautious not to conflate credibility with legitimacy, but there is no question that credibility is part of the process of legitimizing technical change. In examining the construction of credibility, these authors focus on the third question of the framework (legitimizing innovation) and, from that vantage point, illuminate the role of actors and instruments that will guide innovations in sustainability markets.

While standards are instruments of non-dominant hierarchies, the classical instrument of dominant hierarchies is regulation. David Barberá-Tomás and Jordi Molas-Gallart tell the tragic consequences of an innovation in hip-replacement prosthetics that went terribly wrong. It is estimated that about 30,000 replacement hips failed. The FDA, under the 1976 Medical Device Act, allows incremental improvements in medical devices to go to market after only laboratory trials, on the assumption that any substantive innovations have already been tested in regular clinical trials. This policy was designed as an incentive for innovation, a relief from high regulatory costs. However, the authors argue, when a product has been improved continuously for a number of years after its original release, any marginal improvement comes at a higher cost or higher risk (a point they refer to as the late stage of the product life-cycle). This has tilted the balance in favor of risky improvements, as the hip prosthesis case illustrates. The story speaks to the integration of technical and cultural systems: a policy that encourages incremental innovation may alter the way medical device companies assess the relative risk of their innovations, precisely because they focus on incremental improvements over radical ones. Returning to the analytical framework, the vantage point of regulation (instrumentation) elucidates the particular complexities and biases in agents' decisions.

Two additional case studies discuss the discontinuation of the incandescent light bulb (ILB) and the emergence of translational research, both in Western Europe. The first study, authored by Peter Stegmaier, Stefan Kuhlmann and Vincent R. Visser (Chapter 6), focuses on a relatively smooth transition. Wide support for replacing ILBs translated into political will and a market willing to purchase new energy-efficient bulbs. In effect, the new technical system was relatively easy to re-integrate into a social system already in change (public values had shifted in Europe to favor sustainable consumption), and the authors are thus able to emphasize how agents make sense of the transition. Socio-technical change does not have a unique meaning: for citizens it means living in congruence with their values; for policy makers it means accruing political capital; for entrepreneurs it means new business opportunities. The case by Etienne Vignola-Gagné, Peter Biegelbauer and Daniel Lehner (Chapter 7) offers a similar lesson about governance. My reading of their multi-site study of the implementation of translational research (a management movement that seeks to bridge laboratory and clinical work in medical research) reveals how the different agents involved make sense of this organizational innovation. Entrepreneurs see a new market niche, researchers strive to increase the impact of their work, and public officials align their advocacy for translation with the now-regular calls for rendering publicly funded research more productive. Both chapters illuminate a lesson that is as old as it is useful to remember: technological innovation is interpreted in as many ways as there are agents participating in it.

Innovation for whom?

The framework and illustrations of this book are useful for those of us interested in the governance of system integration. The typology of different modes of governance and the three vantage points from which empirical analysis can be deployed are very useful indeed. Further development of this framework should include the question of how political power is redistributed as an effect of innovation and of the system integration and re-integration it triggers. The question is pressing because the outcomes of innovation vary as power structures are reinforced or weakened by the emergence of new technologies, not to mention ongoing destabilizing forces such as social movements. Put another way, the framework should be expanded to explain in which circumstances innovation exacerbates inequality. The expanded framework should probe whether the mutual accommodation is asymmetric across socio-economic groups, which is the same as asking: are poor people asked to do more of the adapting to new technologies? These questions have great relevance in contemporary debates about economic and political inequality.

I believe that Borrás and Edler and their colleagues have done us a great service in organizing a broad but dispersed literature and offering an intuitive and comprehensive framework for studying the governance of innovation. The conceptual and empirical parts of the book are instructive, and I look forward to the papers that will follow, testing this framework. We need to better understand the governance of socio-technical change and the dynamics of systems integration. Without a unified framework of comparison, the ongoing efforts in various disciplines will not amount to a greater understanding of the big picture.

I also have a selfish reason to like this book: it helps me make sense of my carrier's push to integrate my value system with its technical system. If I decide to adapt to a newer phone, I can readily do so because I have time and other resources. But that may not be the case for many customers of 2G devices who have neither the resources nor the inclination to learn to use more complex devices. For that reason alone, I'd argue that this sort of innovation-led systems integration could be done more democratically. Still, I could meet the decision of my carrier with indifference: when the service is disconnected, I could simply try to get by without the darn toy.

Note: Thanks to Joseph Schuman for an engaging discussion of this book with me.


Podcast: Oil’s not well – How the drastic fall in prices will impact South Asia

       





Sharing Threat Intelligence: Necessary but Not Sufficient?

Chairman Johnson, ranking member Carper, members of the Committee, thank you for the opportunity to testify. I am Richard Bejtlich, Chief Security Strategist at FireEye. I am also a nonresident senior fellow at the Brookings Institution, and I am pursuing a PhD in war studies from King’s College London. I began my security career as…

       





To hack, or not to hack?

Has President Barack Obama secured relief from Chinese hacking? That is the question on the minds of many following the announcement by the American leader and his counterpart, Chinese President Xi Jinping, on September 25, 2015. On balance, the agreement is a step in the right direction. At best, I would expect it to result…

       





Make way for mayors: Why the UK’s biggest power shift may not be the June 8 general election

United Kingdom Prime Minister Theresa May’s call for a snap general election on June 8 has threatened to overshadow another important vote that could reshape the landscape of urban leadership in England. On May 4, voters in six regions, including the large metros of Manchester and Liverpool, will head to the polls for the very…

       





The emigration election: Why the EU is not like America

Americans tend to see foreign events through their own domestic lenses. In the case of the European parliamentary elections, the temptation is reinforced by the noisy arrival in Europe of erstwhile Trump advisor Steve Bannon. Bannon has been instrumental in establishing a pan-European alliance of nationalists for a “Common Sense Europe,” including Hungarian Prime Minister…

       





Why Bridgegate proves we need fewer hacks, machines, and back room deals, not more


I had been mulling a rebuttal to my colleague and friend Jon Rauch’s interesting—but wrong—new Brookings paper praising the role of “hacks, machines, big money, and back room deals” in democracy. I thought the indictments of Chris Christie’s associates last week provided a perfect example of the dangers of all of that, and so of why Jon was incorrect. But in yesterday’s L.A. Times, he beat me to it, himself defending the political morality (if not the efficacy) of their actions, and in the process delivering a knockout blow to his own position.

Bridgegate is a perfect example of why we need fewer "hacks, machines, big money, and back room deals" in our politics, not more. There is no justification whatsoever for government officials abusing their powers: stopping emergency vehicles and risking lives, making kids late for school and parents late for their jobs, all to retaliate against a mayor who withheld an election endorsement. We vote in our democracy to make government work, not break it. We expect that officials will serve the public, not their personal interests. This conduct weakens our democracy rather than strengthening it.

It is also incorrect that, as Jon suggests, reformers and transparency advocates are, in part, to blame for the gridlock that sometimes afflicts our American government at every level. As my co-authors and I demonstrated at some length in our recent Brookings paper, “Why Critics of Transparency Are Wrong,” and in our follow-up Op-Ed in the Washington Post, reform and transparency efforts are no more responsible for the current dysfunction in our democracy than they were for the gridlock in Fort Lee. Indeed, in both cases, “hacks, machines, big money, and back room deals” are a major cause of the dysfunction. The vicious cycle of special interests, campaign contributions, and secrecy too often freezes our system into stasis, both on a grand scale, when special interests block needed legislation, and on a petty scale, as in Fort Lee. The power of megadonors has, for example, made dysfunction within the House Republican Caucus worse, not better.

Others will undoubtedly address Jon’s new paper at length. But one other point is worth noting now. As in foreign policy discussions, I don’t think Jon’s position merits the mantle of political “realism,” as if those who want democracy to be more democratic and less corrupt are fluffy-headed dreamers. It is the reformers who are the true realists. My co-authors and I in our paper stressed the importance of striking realistic, hard-headed balances, e.g. in discussing our non-absolutist approach to transparency; alas, Jon gives that the back of his hand, acknowledging our approach but discarding the substance to criticize our rhetoric as “radiat[ing] uncompromising moralism.” As Bridgegate shows, the reform movement’s “moralism" correctly recognizes the corrupting nature of power, and accordingly advocates reasonable checks and balances. That is what I call realism. So I will race Jon to the trademark office for who really deserves the title of realist!


Refugees: Why Seeking Asylum is Legal and Australia’s Policies are Not

      
 
 





No, the sky is not falling: Interpreting the latest SAT scores


Earlier this month, the College Board released SAT scores for the high school graduating class of 2015. Both math and reading scores declined from 2014, continuing a steady downward trend that has been in place for the past decade. Pundits of contrasting political stripes seized on the scores to bolster their political agendas. Michael Petrilli of the Fordham Foundation argued that falling SAT scores show that high schools need more reform, presumably those his organization supports, in particular, charter schools and accountability.* For Carol Burris of the Network for Public Education, the declining scores were evidence of the failure of polices her organization opposes, namely, Common Core, No Child Left Behind, and accountability.

Petrilli and Burris are both misusing SAT scores. The SAT is not designed to measure national achievement; the score losses from 2014 were minuscule; and most of the declines are probably the result of demographic changes in the SAT population. Let’s examine each of these points in greater detail.

The SAT is not designed to measure national achievement

It never was. The SAT was originally meant to measure a student’s aptitude for college independent of that student’s exposure to a particular curriculum. The test’s founders believed that gauging aptitude, rather than achievement, would serve the cause of fairness. A bright student from a high school in rural Nebraska or the mountains of West Virginia, they held, should have the same shot at attending elite universities as a student from an Eastern prep school, despite not having been exposed to the great literature and higher mathematics taught at prep schools. The SAT would measure reasoning and analytical skills, not the mastery of any particular body of knowledge. Its scores would level the playing field in terms of curricular exposure while providing a reasonable estimate of an individual’s probability of success in college.

Note that even in this capacity, the scores never suffice alone; they are only used to make admissions decisions by colleges and universities, including such luminaries as Harvard and Stanford, in combination with a lot of other information—grade point averages, curricular resumes, essays, reference letters, extra-curricular activities—all of which constitute a student’s complete application.

Today’s SAT has moved towards being a content-oriented test, but not entirely. Next year, the College Board will introduce a revised SAT to more closely reflect high school curricula. Even then, SAT scores should not be used to make judgements about U.S. high school performance, whether it’s a single high school, a state’s high schools, or all of the high schools in the country. The SAT sample is self-selected. In 2015, it only included about one-half of the nation’s high school graduates: 1.7 million out of approximately 3.3 million total. And that’s about one-ninth of approximately 16 million high school students.  Generalizing SAT scores to these larger populations violates a basic rule of social science. The College Board issues a warning when it releases SAT scores: “Since the population of test takers is self-selected, using aggregate SAT scores to compare or evaluate teachers, schools, districts, states, or other educational units is not valid, and the College Board strongly discourages such uses.”  

TIME’s coverage of the SAT release included a statement by Andrew Ho of Harvard University, who succinctly makes the point: “I think SAT and ACT are tests with important purposes, but measuring overall national educational progress is not one of them.”

The score changes from 2014 were minuscule

SAT scores changed very little from 2014 to 2015. Reading scores dropped from 497 to 495. Math scores also fell two points, from 513 to 511. Both declines are equal to about 0.017 standard deviations (SD).[i] To illustrate how small these changes truly are, let’s examine a metric I have used previously in discussing test scores. The average American male is 5’10” in height with a SD of about 3 inches. A 0.017 SD change in height is equal to about 1/20 of an inch (0.051). Do you really think you’d notice a difference in the height of two men standing next to each other if they only differed by 1/20th of an inch? You wouldn’t. Similarly, the change in SAT scores from 2014 to 2015 is trivial.[ii]

A more serious concern is the SAT trend over the past decade. Since 2005, reading scores are down 13 points, from 508 to 495, and math scores are down nine points, from 520 to 511. These are equivalent to declines of 0.12 SD for reading and 0.08 SD for math.[iii] Representing changes that have accumulated over a decade, these losses are still quite small. In the Washington Post, Michael Petrilli asked “why is education reform hitting a brick wall in high school?” He also stated that “you see this in all kinds of evidence.”
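The effect sizes above are simple to reproduce from the reported score changes and the standard deviations given in the footnotes; a minimal sketch (the SD values 115 and 113 are taken from notes [i] and [iii]):

```python
# Convert SAT point changes into standard-deviation (SD) units.
# SDs from the footnotes: 2014 SD = 115 for both sections;
# 2005 SDs = 113 (reading) and 115 (math).

def effect_size(point_change: float, sd: float) -> float:
    """Score change divided by the SD of the base-year distribution."""
    return point_change / sd

# One-year changes, 2014 -> 2015 (reading 497 -> 495, math 513 -> 511)
one_year_reading = effect_size(497 - 495, 115)  # ~0.017 SD
one_year_math = effect_size(513 - 511, 115)     # ~0.017 SD

# Ten-year changes, 2005 -> 2015 (reading 508 -> 495, math 520 -> 511)
decade_reading = effect_size(508 - 495, 113)    # ~0.12 SD
decade_math = effect_size(520 - 511, 115)       # ~0.08 SD

# Height analogy: mean male height is 5'10" with an SD of about 3 inches,
# so a 0.017 SD difference corresponds to roughly 1/20 of an inch.
height_equivalent_inches = one_year_reading * 3
```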

You do not see a decline in the best evidence, the National Assessment of Educational Progress (NAEP). Contrary to the SAT, NAEP is designed to monitor national achievement. Its test scores are based on a random sampling design, meaning that the scores can be construed as representative of U.S. students. NAEP administers two different tests to high school age students, the long term trend (LTT NAEP), given to 17-year-olds, and the main NAEP, given to twelfth graders.

Table 1 compares the past ten years’ change in test scores of the SAT with changes in NAEP.[iv] The long term trend NAEP was not administered in 2005 or 2015, so the closest years it was given are shown. The NAEP tests show high school students making small gains over the past decade. They do not confirm the losses on the SAT.

Table 1. Comparison of changes in SAT, Main NAEP (12th grade), and LTT NAEP (17-year-olds) scores. Changes expressed as SD units of base year.

              SAT          Main NAEP     LTT NAEP
              2005-2015    2005-2015     2004-2012
  Reading     -0.12*       +0.05*        +0.09*
  Math        -0.08*       +0.09*        +0.03

*p < .05

Petrilli raised another concern related to NAEP scores by examining cohort trends in NAEP scores. The trend for the 17-year-old cohort of 2012, for example, can be constructed by using the scores of 13-year-olds in 2008 and 9-year-olds in 2004. By tracking NAEP changes over time in this manner, one can get a rough idea of a particular cohort’s achievement as students grow older and proceed through the school system. Examining three cohorts, Fordham’s analysis shows that the gains between ages 13 and 17 are about half as large as those registered between ages nine and 13. Kids gain more on NAEP when they are younger than when they are older.

There is nothing new here. NAEP scholars have been aware of this phenomenon for a long time. Fordham points to particular elements of education reform that it favors—charter schools, vouchers, and accountability—as the probable cause. It is true that those reforms more likely target elementary and middle schools than high schools. But the research literature on age discrepancies in NAEP gains (which is not cited in the Fordham analysis) renders doubtful the thesis that education policies are responsible for the phenomenon.[v]

Whether high school age students try as hard as they could on NAEP has been pointed to as one explanation. A 1996 analysis of NAEP answer sheets found that 25-to-30 percent of twelfth graders displayed off-task test behaviors—doodling, leaving items blank—compared to 13 percent of eighth graders and six percent of fourth graders. A 2004 national commission on the twelfth grade NAEP recommended incentives (scholarships, certificates, letters of recognition from the President) to boost high school students’ motivation to do well on NAEP. Why would high school seniors or juniors take NAEP seriously when this low stakes test is taken in the midst of taking SAT or ACT tests for college admission, end of course exams that affect high school GPA, AP tests that can affect placement in college courses, state accountability tests that can lead to their schools being deemed a success or failure, and high school exit exams that must be passed to graduate?[vi]

Other possible explanations for the phenomenon are: 1) differences in the scales between the ages tested on LTT NAEP (in other words, a one-point gain on the scale between ages nine and 13 may not represent the same amount of learning as a one-point gain between ages 13 and 17); 2) different rates of participation in NAEP among elementary, middle, and high schools;[vii] and 3) social trends that affect all high school students, not just those in public schools. The third possibility can be explored by analyzing trends for students attending private schools. If Fordham had disaggregated the NAEP data by public and private schools (the scores of Catholic school students are available), it would have found that the pattern among private school students is similar—younger students gain more than older students on NAEP. That similarity casts doubt on the notion that policies governing public schools are responsible for the smaller gains among older students.[viii]

Changes in the SAT population

Writing in the Washington Post, Carol Burris addresses the question of whether demographic changes have influenced the decline in SAT scores. She concludes that they have not, and in particular, she concludes that the growing proportion of students receiving exam fee waivers has probably not affected scores. She bases that conclusion on an analysis of SAT participation disaggregated by level of family income. Burris notes that the percentage of SAT takers has been stable across income groups in recent years. That criterion is not trustworthy. About 39 percent of students in 2015 declined to provide information on family income. The 61 percent that answered the family income question are probably skewed against low-income students who are on fee waivers (the assumption being that they may feel uncomfortable answering a question about family income).[ix] Don’t forget that the SAT population as a whole is a self-selected sample. A self-selected subsample from a self-selected sample tells us even less than the original sample, which told us almost nothing.

The fee waiver share of SAT takers increased from 21 percent in 2011 to 25 percent in 2015. The simple fact that fee waivers serve low-income families, whose children tend to be lower-scoring SAT takers, is important, but not the whole story here. Students from disadvantaged families have always taken the SAT. But they paid for it themselves. If an additional increment of disadvantaged families take the SAT because they don’t have to pay for it, it is important to consider whether the new entrants to the pool of SAT test takers possess unmeasured characteristics that correlate with achievement—beyond the effect already attributed to socioeconomic status.

Robert Kelchen, an assistant professor of higher education at Seton Hall University, calculated the effect on national SAT scores of just three jurisdictions (Washington, DC, Delaware, and Idaho) adopting policies of mandatory SAT testing paid for by the state. He estimated that these policies explain about 21 percent of the nationwide decline in test scores between 2011 and 2015. He also notes that a more thorough analysis, incorporating fee waivers of other states and districts, would surely boost that figure. Fee waivers in two dozen Texas school districts, for example, are granted to all juniors and seniors in high school. And all students in those districts (including Dallas and Fort Worth) are required to take the SAT beginning in the junior year. Such universal testing policies can increase access and serve the cause of equity, but they will also, at least for a while, lead to a decline in SAT scores.

Here, I offer my own back of the envelope calculation of the relationship of demographic changes with SAT scores. The College Board reports test scores and participation rates for nine racial and ethnic groups.[x] These data are preferable to family income because a) almost all students answer the race/ethnicity question (only four percent are non-responses versus 39 percent for family income), and b) it seems a safe assumption that students are more likely to know their race or ethnicity compared to their family’s income.

The question tackled in Table 2 is this: how much would the national SAT scores have changed from 2005 to 2015 if the scores of each racial/ethnic group stayed exactly the same as in 2005, but each group’s proportion of the total population were allowed to vary? In other words, the scores are fixed at the 2005 level for each group—no change. The SAT national scores are then recalculated using the 2015 proportions that each group represented in the national population.

Table 2. SAT Scores and Demographic Changes in the SAT Population (2005-2015)

              Projected Change Based on     Actual      Projected Change as
              Change in Proportions         Change      Percentage of Actual Change
  Reading     -9                            -13         69%
  Math        -7                            -9          78%
The data suggest that two-thirds to three-quarters of the SAT score decline from 2005 to 2015 is associated with demographic changes in the test-taking population. The analysis is admittedly crude. The relationships are correlational, not causal. The race/ethnicity categories are surely serving as proxies for a bundle of other characteristics affecting SAT scores, some unobserved and others (e.g., family income, parental education, language status, class rank) that are included in the SAT questionnaire but produce data difficult to interpret.
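The counterfactual behind Table 2 is a simple reweighting: hold each group's 2005 mean score fixed and average those means using the 2015 population shares. A sketch of the method follows; the group names and numbers are deliberately hypothetical, not the College Board figures.

```python
# Counterfactual reweighting: how much of a national score change is
# attributable to composition alone, with every group's mean held fixed?

def weighted_mean(scores: dict, proportions: dict) -> float:
    """National mean as the proportion-weighted average of group means."""
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    return sum(scores[g] * proportions[g] for g in scores)

# Illustrative two-group example: a higher-scoring group shrinks from 70%
# to 60% of test takers while both groups' mean scores stay unchanged.
scores_2005 = {"group_a": 520, "group_b": 470}
shares_2005 = {"group_a": 0.70, "group_b": 0.30}
shares_2015 = {"group_a": 0.60, "group_b": 0.40}

actual_2005 = weighted_mean(scores_2005, shares_2005)          # 505.0
counterfactual_2015 = weighted_mean(scores_2005, shares_2015)  # 500.0

# A 5-point national drop with no group's performance changing at all.
composition_effect = counterfactual_2015 - actual_2005
```

In the post's calculation, the same formula is applied to nine racial/ethnic groups, and the projected change is then compared with the actual 2005-2015 change to produce the percentages in the last column of Table 2.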

Conclusion

Using an annual decline in SAT scores to indict high schools is bogus. The SAT should not be used to measure national achievement. SAT changes from 2014-2015 are tiny. The downward trend over the past decade represents a larger decline in SAT scores, but one that is still small in magnitude and correlated with changes in the SAT test-taking population.

In contrast to SAT scores, NAEP scores, which are designed to monitor national achievement, report slight gains for 17-year-olds over the past ten years. It is true that LTT NAEP gains are larger among students from ages nine to 13 than from ages 13 to 17, but research has uncovered several plausible explanations for why that occurs. The public should exercise great caution in accepting the findings of test score analyses. Test scores are often misinterpreted to promote political agendas, and much of the alarmist rhetoric provoked by small declines in scores is unjustified.


* In fairness to Petrilli, he acknowledges in his post, “The SATs aren’t even the best gauge—not all students take them, and those who do are hardly representative.”


[i] The 2014 SD for both SAT reading and math was 115.

[ii] A substantively trivial change may nevertheless reach statistical significance with large samples.

[iii] The 2005 SDs were 113 for reading and 115 for math.

[iv] Throughout this post, SAT’s Critical Reading (formerly, the SAT-Verbal section) is referred to as “reading.” I only examine SAT reading and math scores to allow for comparisons to NAEP. Moreover, SAT’s writing section will be dropped in 2016.

[v] The larger gains by younger vs. older students on NAEP is explored in greater detail in the 2006 Brown Center Report, pp. 10-11.

[vi] If these influences have remained stable over time, they would not affect trends in NAEP. It is hard to believe, however, that high stakes tests carry the same importance today to high school students as they did in the past.

[vii] The 2004 blue ribbon commission report on the twelfth grade NAEP reported that by 2002 participation rates had fallen to 55 percent. That compares to 76 percent at eighth grade and 80 percent at fourth grade. Participation rates refer to the originally drawn sample, before replacements are made. NAEP is conducted with two stage sampling—schools first, then students within schools—meaning that the low participation rate is a product of both depressed school (82 percent) and student (77 percent) participation. See page 8 of: http://www.nagb.org/content/nagb/assets/documents/publications/12_gr_commission_rpt.pdf

[viii] Private school data are spotty on the LTT NAEP because of problems meeting reporting standards, but analyses identical to Fordham’s can be conducted on Catholic school students for the 2008 and 2012 cohorts of 17-year-olds.

[ix] The non-response rate in 2005 was 33 percent.

[x] The nine response categories are: American Indian or Alaska Native; Asian, Asian American, or Pacific Islander; Black or African American; Mexican or Mexican American; Puerto Rican; Other Hispanic, Latino, or Latin American; White; Other; and No Response.


To fast or not to fast—that is the coronavirus question for Ramadan

       





Not just a typographical change: Why Brookings is capitalizing Black

Brookings is adopting a long-overdue policy to properly recognize the identity of Black Americans and other people of ethnic and indigenous descent in our research and writings. This update comes just as the 1619 Project is re-educating Americans about the foundational role that Black laborers played in making American capitalism and prosperity possible. Without Black…

       





Strengthening families, not just marriages


In their recent blog for Social Mobility Memos, Brad Wilcox, Robert Lerman, and Joseph Price make a convincing case that a stable family structure is an important factor in increased social mobility, higher economic growth, and less poverty over time.

Why is marriage so closely tied to family income?

The interesting question is: what lies behind this relationship? Why is a rise (or a smaller decline) in the proportion of married families associated, for example, with higher growth in average family incomes or a decline in poverty? The authors suggest a number of reasons, including the positive effects of marriage for children, less crime, men’s engagement in work, and income pooling. Of these, however, income pooling is by far the most important. Individual earnings have increased very little, if at all, over the past three or four decades, so the only way for families to get ahead has been to add a second earner to the household. This is only possible within marriage or some other type of income pooling arrangement like cohabitation. Marriage here is the means: income pooling is the end.

Is marriage the best route to income pooling?

How do we encourage more people to share incomes and expenses? There are no easy answers. Wilcox and his co-authors favor reducing marriage penalties in tax and benefit programs, expanding training and apprenticeship programs, limiting divorces in cases where reconciliation is still possible, and civic efforts to convince young people to follow what I and others have called the “success sequence.” All of these ideas are fine in principle. The question is how much difference they can make in practice. Previous efforts have had at best modest results, as a number of articles in the recent issue of the Brookings-Princeton journal The Future of Children point out.      

Start the success sequence with a planned pregnancy

Our success sequence, which Wilcox wants to use as the basis for a pro-marriage civic campaign, requires teens and young adults to complete their education, get established in a job, and to delay childbearing until after they are married. The message is the right one.

The problem is that many young adults are having children before marriage. Why? Early marriage is not compatible, in their view, with the need for extended education and training. They also want to spend longer finding the best life partner. These are good reasons to delay marriage. But pregnancies and births still occur, with or without marriage. For better or worse, our culture now tolerates, and often glamorizes, multiple relationships, including premarital sex and unwed parenting. This makes bringing back the success sequence difficult.

Our best bet is to help teens and young adults avoid having a child until they have completed their education, found a steady job, and, most importantly, found a stable partner with whom they want to raise children and with whom they can pool their income. In many cases this means marriage; but not in all. The bottom line: teens and young adults need more access to, and better education and counseling on, birth control, especially little-used but highly effective forms such as the IUD and the implant. Contraception, not marriage, is where we should be focusing our attention.


Does pre-K work—or not?


In this tumultuous election year one wonders whether reasoned debate about education or other policies is still possible. That said, research has a role to play in helping policymakers make good decisions, if not before they are in office, then after. So what do we know about the ability of early education to change children’s lives? At the moment, scholars are divided. One camp argues that pre-k doesn’t work, suggesting that it would be a mistake to expand it. Another camp believes that it is one of the most cost-effective things we could do to improve children’s lifetime prospects, especially if they come from disadvantaged homes.

The pre-k advocates cite several earlier demonstrations, such as the Perry Preschool and Abecedarian programs. These have been rigorously evaluated and found to improve children’s long-term success, including less use of special education, increases in high school graduation, reduced crime, and higher earnings. Participants in the Abecedarian program, for example, earned 60 percent more than controls by age 30. Mothers benefit as well since more of them are able to work. The Abecedarian project increased maternal earnings by $90,000 over the course of the mother’s career. Finally, by reducing crime, improving health, and decreasing the need for government assistance, these programs also reduce the burden on taxpayers. According to one estimate, the programs even increase GDP to the tune of $30 to $80 billion (in 2015 dollars) once the children have moved into and through their working lives. A careful summary of all this research can be found in this year’s Economic Report of the President. The Report notes, and I would emphasize, that no one study can do justice to this issue, and not every program has been successful, but the weight of the evidence points strongly to the overall success of high-quality programs. This includes not just the small, very intensive model programs, but importantly the large, publicly funded pre-school programs such as those in Boston, Tulsa, Georgia, North Carolina, and New Jersey. Some estimates put the ratio of benefits to costs at $7 to $1. Very few investments promise such a large return. Pre-k advocates admit that any gains in IQ may fade but that boosts to nonacademic skills such as self-control, motivation, and planning have long-term effects that have been documented in studies of siblings exposed to differing amounts of early education.

The pre-k critics point to findings from rigorous evaluations of the national Head Start program and of a state-wide program in Tennessee. These studies found that any gains from pre-k at the end of the program had faded by the time the children were in elementary school. They argue that the positive results from earlier model programs, such as Perry and Abecedarian, may have been the result of their small scale, their intensity, and the fact that the children involved had few alternative sources of care or early education. Children with more than adequate home environments or good substitute child care do not benefit as much, or at all, from participating in a pre-k program. In my view, this is an argument for targeted programs or for a universal program with a sliding scale fee for those who participate. In the meantime, it is too early to know what the longer-term effects of current programs will be. Despite their current popularity among scholars, one big problem with randomized controlled trials (RCTs) is that it takes a generation to get the answers you need. And, as is the case with Perry and Abecedarian, by the time you get them, they may no longer be relevant to contemporary environments in which mothers are better educated and more children have access to out-of-home care.

In the end, you can’t make public policy with RCTs alone. We need to incorporate lessons from neuroscience about the critical changes to the brain that occur in early childhood and the insights of specialists in child development. We need to consider what happens to non-cognitive skills over the longer term. We need to worry about the plight of working mothers, especially single parents, who cannot work without some form of out-of-home care. Providing that care on the cheap may turn out to be penny wise and pound foolish. (A universal child care program in Quebec funded at $5 a day led to worse behavior among the kids in the program.) Of course we need to continuously improve the effectiveness of pre-k through ongoing evaluation. That means weeding out ineffective programs along with improving curriculum, teacher preparation and pay, and follow-up in the early grades. Good-quality pre-k works; bad-quality does not. For the most disadvantaged children, it may require intervening much earlier than age 3 or 4, as the Abecedarian program did – with strikingly good results.

Our society is coming apart. Scholars from AEI’s Charles Murray to Harvard’s Robert Putnam agree on that point. Anything that can improve the lives of the next generation should command our attention. The evidence will never be air-tight. But once one adds it all up, investing in high quality pre-k looks like a good bet to me.

Editor's note: This piece originally appeared in Real Clear Markets.

Publication: Real Clear Markets
Image Source: © Carlos Garcia Rawlins / Reuters

Money for nothing: Why a universal basic income is a step too far


The idea of a universal basic income (UBI) is certainly an intriguing one, and has been gaining traction. Swiss voters just turned it down. But it is still alive in Finland, in the Netherlands, in Alaska, in Oakland, CA, and in parts of Canada. 

Advocates of a UBI include Charles Murray on the right and Anthony Atkinson on the left. This surprising alliance alone makes it interesting, and it is a reasonable response to a growing pool of Americans made jobless by the march of technology and a safety net that is overly complex and bureaucratic. A comprehensive and excellent analysis in The Economist points out that while fears about technological unemployment have previously proved misleading, “the past is not always a good guide to the future.”

Hurting the poor

Robert Greenstein argues, however, that a UBI would actually hurt the poor by reallocating support up the income scale. His logic is inescapable: either we have to spend additional trillions providing income grants to all Americans or we have to limit assistance to those who need it most. 

One option is to provide unconditional payments along the lines of a UBI, but to phase it out as income rises. Libertarians like this approach since it gets rid of bureaucracies and leaves the poor free to spend the money on whatever they choose, rather than providing specific funds for particular needs. Liberals fear that such unconditional assistance would be unpopular and would be an easy target for elimination in the face of budget pressures. Right now most of our social programs are conditional. With the exception of the aged and the disabled, assistance is tied to work or to the consumption of necessities such as food, housing, or medical care, and our two largest means-tested programs are Food Stamps and the Earned Income Tax Credit.

The case for paternalism

Liberals have been less willing to openly acknowledge that a little paternalism in social policy may not be such a bad thing. In fact, progressives and libertarians alike are loath to admit that many of the poor and jobless are lacking more than just cash. They may be addicted to drugs or alcohol, suffer from mental health issues, have criminal records, or have difficulty functioning in a complex society. Money may be needed but money by itself does not cure such ills. 

A humane and wealthy society should provide the disadvantaged with adequate services and support. But there is nothing wrong with making assistance conditional on individuals fulfilling some obligation whether it is work, training, getting treatment, or living in a supportive but supervised environment.

In the end, the biggest problem with a universal basic income may not be its costs or its distributive implications, but the flawed assumption that money cures all ills.  

Image Source: © Tom Polansek / Reuters

In Cuba, there is nothing permanent except change


Change is a complicated thing in Cuba. On the one hand, many Cubans remain frustrated with limits on economic and political opportunity, and millennials are emigrating in ever-rising numbers. On the other, there is more space for entrepreneurship, and Havana is full of energy and promise today.

The island’s emerging private sector is growing—and along with it, start-up investment costs. Three years ago, Yamina Vicente opened her events planning firm, Decorazón, with a mere $500 in cash. Today she estimates she would need $5,000 to compete. New upscale restaurants are opening: Mery Cabrera returned from Ecuador to invest her savings in Café Presidente, a sleek bistro located on the busy Avenue of the Presidents. And lively bars at establishments like 304 O’Reilly feature bright mixologists doing brisk business.


Photo credit: Richard Feinberg.

Havana’s hotels are fully booked through the current high season. The overflow of tourists is welcome news for the thousands of bed-and-breakfasts flowering throughout the city (many of which are now networked through AirBnB). While most bed-and-breakfasts used to be one or two rooms rented out of people’s homes, Cubans today are renovating entire buildings to rent out. These are the green shoots of what will become boutique hotels, and Cubans are quitting their low-paying jobs in the public sector to become managers of their family’s rental offerings.

Another new sign: real estate agencies! Most Cubans own their own homes—really own them, mortgage-free. But only recently did President Raúl Castro authorize the sale of homes, suddenly giving Cubans a valuable financial asset. Many sell them to get cash to open a new business. Others, to emigrate to Miami.

WiFi hot spots are also growing in number. Rejecting an offer from Google to provide Internet access to the entire island, the Cuban government instead set up some 700 public access locations. These include 65 WiFi hot spots in parks, hotels, or major thoroughfares, where mostly young Cubans gather to message friends or chat with relatives overseas.

Economic swings

2015 was a good year for the Cuban economy, relatively speaking. Growth rose from the disappointing 2 percent in recent years to (by official measures) 4 percent. The Brazilian joint venture cigarette company, Brascuba, reported a 17 percent jump in sales, and announced a new $120 million investment in the Mariel Economic Development Zone. Shoppers crowded state-run malls over the holiday season, too. 


Photo credit: Richard Feinberg.

Consumers still report chronic shortages in many commodities, ranging from beer to soap, and complain of inflation in food prices. Alarmed by the chronic crisis of low productivity in agriculture, the government announced tax breaks for farmers in 2016. The government is already forecasting a slower growth rate for 2016, attributed to lower commodity prices and a faltering Venezuelan economy. It’s likely to fall back to the average 2 percent rate that has characterized the past decade.

Pick up the pace

Cuban officials are looking forward to the 7th Congress of the Cuban Communist Party (CCP) in mid-April. There is little public discussion of the agenda, however. Potential initiatives include a new electoral law permitting direct election of members of the national assembly (who are currently chosen indirectly by regional assemblies or by CCP-related mass organizations); a timetable for unification of the currency (Cubans today must deal with two forms of money); some measures to empower provincial governments; and the development of a more coherent, forward-looking economic development strategy.

But for many younger Cubans, the pace of change is way too slow. The talk of the town remains the exit option. Converse with any well-educated millennial and they’ll tell you that half or more of their classmates are now living abroad. Indeed, there are now two brain drains: an internal brain drain, as government officials abandon the public sector for higher incomes in the growing private sector; and emigration overseas, mostly to the United States but also to Spain, Canada, and Mexico.

The challenge for the governing CCP is to give young people hope in the future. The White House has signaled that President Obama may visit Cuba this year. Such a visit by Obama—who is immensely popular on the island—could help. But the main task is essentially a Cuban one.

Richard Feinberg’s forthcoming book, “Open for Business: Building the New Cuban Economy,” will be published by Brookings Press later this year.


To talk or not to talk to Trump: A question that divides Iran

Earlier this month, Iran further expanded its nuclear enrichment program, taking another step away from the nuclear accord it had signed with world powers in July 2015. Since President Trump withdrew the U.S. from the accord in May 2018 and re-imposed U.S. sanctions, Iran’s economy has lost nearly 10 percent of its output. Although the…


Italy: “the workers are not cannon fodder” – after the 30 March assembly, the fight for lockdown continues...

Since the beginning of the healthcare crisis, the decrees issued by the Conte government have, one after the other, increased the number of restrictions. This is on top of the ordinances from the different regions. A campaign has developed and has promoted social distancing through calls to stay at home, hashtags and appeals. But all this fervour did not affect the millions of workers forced to continue going to work in non-essential companies and services.





Normal winter weather is not a crisis

Weather forecasters need to stop treating it as such.





There's not a lot of history in the White House, actually

It's mostly a fake, completely rebuilt in the early 1950s.





Another Reason We Need the Smart Grid: Record Heat

In case you're still among the set doubting whether the smart grid is really necessary, Earth2Tech has a solid post explaining how record heat (something that is going to happen a lot more often, unfortunately) is a prime example of how the smart grid can…





Wretched Excess or the future of housing design? Another look at the car elevator

There is a perverse logic to this idea of bringing your car to your apartment.





Another look at the question: Bidet or toilet paper, or yes, adult wipes?

Apparently adult wipes are a huge growth industry. Another good reason to switch to a bidet-equipped toilet.





Tiny house lovers can tie the knot in the Tiny Chapel

For the couple that wants to avoid a big wedding and all the trappings of large event venues, Tiny Chapel Weddings offers a decidedly smaller way to get married.





Eco Wine Review: Cline Cellars 2010 Cool Climate Pinot Noir

This eco-wine is bursting with red fruit aromas and vanilla. And its minty finish is subtle yet clean, so you won't mind a second glass. Which isn't a bad thing, as this Pinot comes in under the $15 mark. And the winery is 100-percent solar-powered.





Eco Wine Review: Lynmar Estate 2008 Russian River Valley Pinot Noir

A delicate balance of dark fruit, cocoa, pepper and mushroom from a sustainable vineyard that donates to AIDS and cancer patients.