data Five Indicted in New Jersey for Largest Known Data Breach Conspiracy By www.justice.gov Published On :: Thu, 25 Jul 2013 11:14:32 EDT A federal indictment made public today in New Jersey charges five men with conspiring in a worldwide hacking and data breach scheme that targeted major corporate networks, stole more than 160 million credit card numbers and resulted in hundreds of millions of dollars in losses. It is the largest such scheme ever prosecuted in the United States. Full Article OPA Press Releases
data Two Romanian Nationals Sentenced to Prison for Scheme to Steal Payment Card Data By www.justice.gov Published On :: Wed, 4 Sep 2013 12:23:45 EDT Adrian-Tiberiu Oprea, 29, of Constanta, Romania, and Iulian Dolan, 28, of Craiova, Romania, were sentenced today to serve 15 years and seven years in prison, respectively, for participating in an international, multimillion-dollar scheme to remotely hack into and steal payment card data from hundreds of U.S. merchants’ computers. Full Article OPA Press Releases
data Attorney General Holder: Justice Dept. to Collect Data on Stops, Arrests as Part of Effort to Curb Racial Bias in Criminal Justice System By www.justice.gov Published On :: Mon, 28 Apr 2014 12:34:03 EDT Noting that African-American and Hispanic males are arrested at disproportionately high rates, U.S. Attorney General Eric Holder said Monday that the Justice Department will seek to collect data about stops, searches and arrests as part of a larger effort to analyze and reduce the possible effect of bias within the criminal justice system. Full Article OPA Press Releases
data Statement from the Department of Justice and Office of Director of National Intelligence on the Declassification of Additional Documents Regarding the Collection of Bulk Telephony Metadata Under Section 215 of the USA Patriot Act By www.justice.gov Published On :: Wed, 14 May 2014 14:55:44 EDT Today, the Department of Justice and Office of the Director of National Intelligence released, in redacted form, a previously classified series of Foreign Intelligence Surveillance Court filings and orders from 2009-2010 concerning the collection of bulk telephony metadata under Section 215 of the USA Patriot Act. These documents relate to a robust interaction that occurred between the Department of Justice and a telecommunications service provider that included the provider’s review of prior FISC applications, orders and opinions, regarding lawful compliance with those orders. Full Article OPA Press Releases
data Attorney General Holder Pledges Support for Legislation to Provide E.U. Citizens with Judicial Redress in Cases of Wrongful Disclosure of Their Personal Data Transferred to the U.S. for Law Enforcement Purposes By www.justice.gov Published On :: Fri, 29 Aug 2014 15:42:23 EDT Attorney General Eric Holder announced today that the Obama administration, as part of successfully concluding negotiations on the E.U.-U.S. Data Protection and Privacy Agreement (DPPA), would seek to work with Congress to enact legislation that would provide E.U. citizens with the right to seek redress in U.S. courts if personal data shared with U.S. authorities by their home countries for law enforcement purposes under the proposed agreement is subsequently intentionally or willfully disclosed, to the same extent that U.S. citizens could seek judicial redress in U.S. courts for such disclosures of their own law enforcement information under the Privacy Act. Full Article OPA Press Releases
data Bureau of Justice Statistics Releases Tribal Crime Data Collection Activities, 2014 By www.justice.gov Published On :: Fri, 29 Aug 2014 12:25:31 EDT This fourth annual report to Congress describes efforts to collect and improve data on crime and justice in Indian country, as required by the Tribal Law and Order Act of 2010. The report details the number of tribal law enforcement agencies reporting crime data to the FBI’s Uniform Crime Reporting program. It describes BJS’s first National Survey of Tribal Court Systems, which will collect data on tribal courts in the lower 48 states and Alaska, covering 566 tribes. Full Article OPA Press Releases
data Russian National Arraigned on Indictment for Distributing Credit Card Data Belonging to Thousands of Card Holders By www.justice.gov Published On :: Thu, 28 Aug 2014 14:57:29 EDT A Russian national indicted for hacking into point of sale systems at retailers throughout the United States and operating websites that distributed credit card data of thousands of credit card holders appeared today for arraignment in U.S. federal court, announced U.S. Attorney Jenny A. Durkan of the Western District of Washington and Assistant Attorney General Leslie R. Caldwell of the Justice Department’s Criminal Division Full Article OPA Press Releases
data Sutro Biopharma Reports Updated Data From Ovarian Cancer Study By www.rttnews.com Published On :: Mon, 27 Apr 2020 11:35:52 GMT Sutro Biopharma Inc.'s (STRO) interim phase I updated clinical data for a dose-escalation study of antibody drug-conjugate STRO-002 in ovarian cancer has been encouraging. Full Article
data Detailed Demographic Data Critical to Effective Coronavirus Response By feedproxy.google.com Published On :: Tue, 21 Apr 2020 14:04:00 -0400 Communities and policymakers working to meet the challenges of a global pandemic may need to take a range of targeted actions, such as building awareness, launching preventive measures, boosting health care infrastructure, or allocating emergency funding. These decisions, which can influence health outcomes significantly, highlight the importance of having the information needed to evaluate... Full Article
data Recipe For Managing Data Disclosure Successfully With Academic Partners: A Public Gene Therapy Company Perspective By feedproxy.google.com Published On :: Tue, 24 Mar 2020 11:17:45 +0000 This blog post was written by Deanna Petersen, CBO of AVROBIO, as part of the From The Trenches feature of LifeSciVC. When AVROBIO went public in June 2018, I found myself on the steep end of an unexpected but interesting… Full Article Business Development From The Trenches academic partners clinical trials gene therapy
data Wanted: Data on the Gender Gap, Digital Divide and Small Businesses By www.apec.org Published On :: Fri, 06 Sep 2019 12:01:00 +0800 We need it for inclusive policymaking Full Article
data A Conservative Legal Group Significantly Miscalculated Data in a Report on Mail-In Voting By tracking.feedpress.it Published On :: 2020-05-02T11:45:00-04:00 by Derek Willis ProPublica is a nonprofit newsroom that investigates abuses of power. Sign up to receive our biggest stories as soon as they’re published. In an April report that warns of the risks of fraud in mail-in voting, a conservative legal group significantly inflated a key statistic, a ProPublica analysis found. The Public Interest Legal Foundation reported that more than 1 million ballots sent out to voters in 2018 were returned as undeliverable. Taken at face value, that would represent a 91% increase over the number of undeliverable mail ballots in 2016, a sign that a vote-by-mail system would be a “catastrophe” for elections, the group argued. However, after ProPublica provided evidence to PILF that it had in fact doubled the official government numbers, the organization corrected its figure. The number of undeliverable mail ballots dropped slightly from 2016 to 2018. The PILF report said that one in five mail ballots issued between 2012 and 2018, a total of 28.3 million, were not returned by voters and were “missing,” which, according to the organization, creates an opportunity for fraud. In a May 1 tweet that included a link to coverage of the report, President Donald Trump wrote: “Don’t allow RIGGED ELECTIONS.” PILF regularly sues state and local election officials to force them to purge some voters from registration rolls, including those it claims have duplicate registrations from another state or who are dead. It is headed by J. Christian Adams, a former Justice Department attorney who was a member of the Trump administration’s disbanded commission on election integrity. The report describes as “missing” all mail ballots that were delivered to a valid address but not returned to be counted. 
In a statement accompanying the report, Adams said that unaccounted-for ballots “represent 28 million opportunities for someone to cheat.” In particular, the organization argues that the number of unreturned ballots would grow if more states adopt voting by mail. Experts who study voting and use the same data PILF used in the report, which is from the Election Administration and Voting Survey produced by the federal Election Assistance Commission, say that it’s wrong to describe unreturned ballots as missing. “Election officials ‘know’ what happened to those ballots,” said Paul Gronke, a professor at Reed College, who is the director of the Early Voting Information Center, a research group based there. “They were received by eligible citizens and not filled out. Where are they now? Most likely, in landfills,” Gronke said by email. A recent RealClear Politics article based on the PILF report suggested that an increase in voting by mail this year could make the kind of fraud uncovered in North Carolina’s 9th Congressional District in 2018 more likely. In that case, a political consultant to a Republican candidate was indicted on charges of absentee ballot fraud for overseeing a paid ballot collection operation. “The potential to affect elections by chasing down unused mail-in ballots and make sure they get counted — using methods that may or may not be legal — is great,” the article argues. PILF’s report was mentioned in other news outlets including the Grand Junction Sentinel in Colorado, “PBS NewsHour” and the New York Post. The Washington Times repeated the inaccurate claim of 1 million undeliverable mail ballots. In a statement, the National Vote at Home Institute, an advocacy group, challenged the characterization of the 28.3 million ballots as missing. 
Of those ballots, 12 million were mailed by election officials in Colorado, Oregon and Washington, which by law send a mail-in ballot to every registered voter, roughly 30% of which are not returned for any given election. “Conflating voters choosing not to cast their ballots with ‘missing’ ballots is a fundamental flaw,” the statement reads. In an interview, Logan Churchwell, the communications director for PILF, acknowledged the error in the number of undelivered ballots, but defended the report’s conclusions, saying that it showed potential vulnerabilities in the voting system. “Election officials send these ballots out in the mail, and for them to say ‘I have no idea what happened after that’ speaks more to the investments they haven’t made to track them,” he said in a telephone interview. But 36 states have adopted processes where voters and local officials can track the status of mail ballots through delivery, much like they can track packages delivered to a home. Churchwell said there are other explanations why mail ballots are not returned and that state and local election officials could report more information about the status of mail ballots. “If you know a ballot got to a house, you can credibly say that ballot’s status is not unknown,” he said. The EAVS data has been published after every general election since 2004, although not every local jurisdiction provides complete responses to its questions. In the data, election officials are asked to provide the number of mail ballots sent to voters, the number returned to be counted and the number of ballots returned as undeliverable by the U.S. Postal Service, which provides specific ballot-tracking services. The survey also asks for the number of ballots that are turned in or invalidated by voters who chose to cast their ballots in person. 
It asks officials to report the number of ballots that do not fit into any of those categories, or are “otherwise unable to be tracked by your office.” Gronke described the last category as “a placeholder for elections officials to put numbers so that the whole column adds up,” and said that there was no evidence to support calling those ballots a pathway to large-scale voter fraud. Numerous academic studies have shown that cases of voter fraud are extremely rare, although they do occur, and that fraud in mail voting seems to occur more often than with in-person voting. Full Article
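ProPublica's check is simple percent-change arithmetic. As a sketch in Python, with round hypothetical counts (not the official EAVS figures), doubling one year's count turns a small decline into a large spurious increase:

```python
def pct_change(old: int, new: int) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

# Hypothetical round numbers, for illustration only (not the EAVS figures):
undeliverable_2016 = 530_000
undeliverable_2018 = 520_000                   # a slight drop, as the corrected data showed
double_counted_2018 = undeliverable_2018 * 2   # the kind of doubling error PILF acknowledged

print(round(pct_change(undeliverable_2016, undeliverable_2018), 1))   # small decline: -1.9
print(round(pct_change(undeliverable_2016, double_counted_2018), 1))  # spurious jump: 96.2
```

Doubling a count that had actually dipped a few percent below the 2016 level yields a jump in the 90%-range, which is consistent with the 91% increase the report originally claimed.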
data Early Data Shows Black People Are Being Disproportionally Arrested for Social Distancing Violations By tracking.feedpress.it Published On :: 2020-05-08T18:22:00-04:00 by Joshua Kaplan and Benjamin Hardy On April 17 in Toledo, Ohio, a 19-year-old black man was arrested for violating the state stay-at-home order. In court filings, police say he took a bus from Detroit to Toledo “without a valid reason.” Six young black men were arrested in Toledo last Saturday while hanging out on a front lawn; police allege they were “seen standing within 6 feet of each other.” In Cincinnati, a black man was charged with violating stay-at-home orders after he was shot in the ankle on April 7; according to a police affidavit, he was talking to a friend in the street when he was shot and was “clearly not engaged in essential activities.” Ohio’s health director, Dr. Amy Acton, issued the state’s stay-at-home order on March 22, prohibiting people from leaving their home except for essential activities and requiring them to maintain social distancing “at all times.” A violation of the order is a misdemeanor, punishable by up to 90 days in jail and a $750 fine. Since the order, hundreds of people have been charged with violations across Ohio. The state has also seen some of the most prominent protests against state stay-at-home orders, as large crowds gather on the statehouse steps to flout the directives. But the protesters, most of them white, have not faced arrest. Rather, in three large Ohio jurisdictions ProPublica examined, charges of violating the order appear to have fallen disproportionately on black people. ProPublica analyzed court records for the city of Toledo and for the counties that include Columbus and Cincinnati, three of the most populous jurisdictions in Ohio.
In all of them, ProPublica found, black people were at least four times as likely to be charged with violating the stay-at-home order as white people. As states across the country attempt to curb the spread of COVID-19, stay-at-home orders have proven instrumental in the fight against the novel coronavirus; experts credit aggressive restrictions with flattening the curve in the nation’s hotbeds. Many states’ orders carry criminal penalties for violations of the stay-at-home mandates. But as the weather warms up and people spend more time outside, defense lawyers and criminal justice reform advocates fear that black communities long subjected to overly aggressive policing will face similarly aggressive enforcement of stay-at-home mandates. In Ohio, ProPublica found, the disparities are already pronounced. As of Thursday night in Hamilton County, which is 27% black and home to Cincinnati, there were 107 charges for violating the order; 61% of defendants are black. The majority of arrests came from towns surrounding Cincinnati, which is 43% black. Of the 29 people charged by the city’s Police Department, 79% were black, according to data provided to ProPublica by the Hamilton County Public Defender. In Toledo, where black people make up 27% of the population, 18 of the 23 people charged thus far were black. Lt. Kellie Lenhardt, a spokeswoman for the Toledo Police Department, said that in enforcing the stay-at-home order, the department’s goal is not to arrest people and that officers are primarily responding to calls from people complaining about violations of the order. She told ProPublica that if the police arrested someone, the officers believed they had probable cause, and that while biased policing would be “wrong,” it would also be wrong to arrest more white people simply “to balance the numbers.” In Franklin County, which is 23.5% black, 129 people were arrested between the beginning of the stay-at-home order and May 4; 57% of the people arrested were black. 
In Cleveland, which is 50% black and is the state’s second-largest city, the Municipal Court’s public records do not include race data. The court and the Cleveland Police Department were unable to readily provide demographic information about arrests to ProPublica, though on Friday, the police said they have issued eight charges so far. In the three jurisdictions, about half of those charged with violating the order were also charged with other offenses, such as drug possession and disorderly conduct. The rest were charged only with violating the order; among that group, the percentage of defendants who were black was even higher. Franklin County is home to Columbus, where enforcement of the stay-at-home order has made national headlines for a very different reason. Columbus is the state capital and Ohio’s largest city with a population of almost 900,000. In recent weeks, groups of mostly white protesters have campaigned against the stay-at-home order on the Statehouse steps and outside the health director’s home. Some protesters have come armed, and images have circulated of crowds of demonstrators huddled close, chanting, many without masks. No protesters have been arrested for violating the stay-at-home order, a spokesperson for the Columbus mayor’s office told ProPublica. Thomas Hach, an organizer of a group called Free Ohio Now, said in an email that he was not aware of any arrests associated with protests in the entire state. The Columbus Division of Police did not respond to ProPublica’s request for comment. Ohio legislators are contemplating reducing the criminal penalties for violating the order. On Wednesday, the state House passed legislation that would eliminate the possibility of jail time for stay-at-home violators. A first offense would result in a warning, and further violations would result in a small fine. The bill is pending in the state Senate. Penalties for violating stay-at-home orders vary across the country.
In many states, including California, Florida, Michigan and Washington, violations can land someone behind bars. In New York state, violations can only result in fines. In Baltimore, police told local media they had only charged two people with violations; police have reportedly relied on a recording played over the loudspeakers of squad cars: “Even if you aren’t showing symptoms, you could still have coronavirus and accidentally spread it to a relative or neighbor. Being home is being safe. We are all in this together.” Enforcement has often resulted in controversy. In New York City, a viral video showed police pull out a Taser and punch a black man after they approached a group of people who weren’t wearing masks. Police say the man who was punched took a “fighting stance” when ordered to disperse. In Orlando, police arrested a homeless man walking a bicycle because he was not obeying curfew. In Hawaii, charges against a man accused of stealing a car battery, normally a misdemeanor punishable by up to 30 days in jail, were enhanced to a felony, which can result in 10 years in prison, because police and prosecutors said he was in violation of the state order. The orders are generally broad, and decisions about which violations to treat as acceptable and which ones to penalize have largely been left to local police departments’ discretion. Kristen Clarke, president of the Lawyers’ Committee for Civil Rights Under Law, a legal organization focused on racial justice, said such discretion has opened the door to police abuse, and she said the U.S. Department of Justice or state governments should issue detailed guidelines about when to make arrests. That discretion “is what’s given rise to these rogue practices,” she told ProPublica, “that are putting black communities and communities of color with a target on their backs.” In jails and prisons around the country, inmates have fallen ill or died from COVID-19 as the virus spreads rapidly through the facilities. 
Many local governments have released some inmates from jail and ordered police to reduce arrests for minor crimes. But in Hamilton County, some people charged with failing to maintain social distancing have been kept in jail for at least one night, even without any other charges. Recently, two sheriff’s deputies who work in the jail tested positive for COVID-19. “The cops put their hands on them, they cram them in the car, they take them to the [jail], which has 800 to 1400 people, depending on the night,” said Sean Vicente, director of the Hamilton County Public Defender’s misdemeanor division. “It’s often so crowded everyone’s just sitting on the floor.” Clarke said the enforcement push is sometimes undercutting the public health effort: “Protecting people’s health is in direct conflict with putting people in overcrowded jails and prisons that have been hotbeds for the virus.” Court records show that the Cincinnati Police Department has adopted some surprising applications of the law. Six people were charged with violations of the order after they were shot. Only one was charged with another crime as well, but police affidavits state that when they were shot, they were or likely were in violation of the order. One man was shot in the ankle while talking to a friend, according to court filings, and “was clearly not engaged in essential activities.” Another was arrested with the same explanation; police wrote that he had gone to the hospital with a gunshot wound. The Cincinnati Police Department did not respond to ProPublica’s requests for comment. In Springfield Township, a small, mostly white Cincinnati suburb, nine people have been arrested for violating the order thus far. All of them are black. Springfield Township Police Chief Robert Browder told ProPublica in an email that the department is “an internationally accredited law enforcement organization” and has “strict policies ... 
to ensure that our zero tolerance policy prohibiting bias-based profiling is adhered to.” Browder said race had not played a role in his department’s enforcement of the order and that he was “appalled if that is the insinuation.” Several of the black people arrested in Springfield Township were working for a company that sells books and magazine subscriptions door to door. One of the workers, Carl Brown, 50, said he and five colleagues were working in Springfield Township when two members of the team were arrested while going door to door. Police called the other sales people, and when they arrived at the scene, they too were arrested. Five of them, including Brown, were charged only with violating the stay-at-home order; the sixth sales person had an arrest warrant in another state, according to Browder, and police also charged her for giving them false identification. Brown said one of the officers had left the group with a warning: They should never come back, and if they do, it’s “going to be worse.” Browder denied that the officers made such a threat, and he said the police had received calls from residents about the sales people and their tactics and that the sales people had failed to register with the Police Department, as required for door-to-door solicitation. Other violations in Hamilton County have been more egregious, but even in some of those cases, the law enforcement response has stirred controversy. On April 4, a man who had streamed a party on Facebook Live, saying, “We don’t give a fuck about this coronavirus,” was arrested in Cincinnati’s Over-the-Rhine neighborhood, the setting of a 2001 riot after police fatally shot an unarmed black man. The man who streamed the party, Rashaan Davis, was charged with violating the stay-at-home order and inciting violence, and his bond was set at $350,000. 
After Judge Alan Triggs said he would release Davis from jail pretrial because the offense charged was nonviolent, local media reported, prosecutors dropped the misdemeanor and said they would focus on the charge of inciting violence, a felony. The Hamilton County prosecutor’s office declined to comment on Davis’ case. In Toledo, there’s been public controversy around perceived differences in the application of the law. On April 21, debate at the Toledo City Council meeting centered around a food truck. Local politicians discussed recent arrests of young black people at house parties, some contrasting them with a large, white crowd standing close together in line outside a BBQ stand, undisturbed by police. Councilmember Gary Johnson told ProPublica he’s asked the police chief to investigate why no one was arrested at a party he’d heard about, where white people were congregating on docks. “I don’t know the circumstances of the arrests,” he said. But “if you feel you need to go into poor neighborhoods and African American neighborhoods, you better be going into white neighborhoods too. … You have to say we’re going to be heavy-handed with the stay-at-home order or we’re going to be light with it. It has to be one or the other.” Toledo police enforcement has not been confined to partygoers. Armani Thomas, 20, is one of the six young men arrested for not social distancing on a lawn. He told ProPublica he was sitting there with nine friends “doing nothing” when the police pulled up. Two kids ran off, and the police made the rest stay, eventually arresting “all the dudes” and letting the girls go. He was taken to the county jail, where several inmates have tested positive, for booking and released after several hours. The men’s cases are pending. 
“When police see black people gathered in public, I think there’s this looming belief that they must be doing something illegal,” RaShya Ghee, a criminal defense attorney and lecturer at the University of Toledo, told ProPublica. “They’re hanging out in a yard — something illegal must have happened. Or, something illegal is about to happen.” Lenhardt, the police lieutenant, said the six men were arrested after police received 911 calls reporting “a group gathering and flashing guns.” None of the six men were arrested on gun charges. As for the 19-year-old charged for taking the bus without reason, she said police asked him on consecutive days to not loiter at a bus station. With more than 70,000 Americans dead from the coronavirus, government officials have not figured out how to balance the threat of COVID-19 with the harms of over-policing, Clarke said. “On the one hand, we want to beat back the pandemic. That’s critical. That’s the end goal,” she told ProPublica. “On the other hand, we’re seeing social distancing being used as a pretext to arrest the very communities that have been hit hardest by the virus.” Full Article
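The "at least four times as likely" comparison in this article is a population-adjusted rate ratio. A back-of-the-envelope Python sketch using the Toledo figures reported above (18 of the 23 people charged were black, in a city that is 27% black); the helper name is ours:

```python
def charge_rate_ratio(group_charges: int, other_charges: int, group_pop_share: float) -> float:
    """Ratio of per-capita charge rates: one group vs. everyone else.

    The city's total population cancels out of the ratio, so only the
    group's share of the population is needed.
    """
    other_pop_share = 1 - group_pop_share
    return (group_charges / group_pop_share) / (other_charges / other_pop_share)

# Toledo: 18 of 23 people charged were black; the city is about 27% black.
print(round(charge_rate_ratio(18, 23 - 18, 0.27), 1))  # ≈ 9.7, well above the 4x floor
```

For Toledo the ratio comes out near 10, consistent with the article's statement that in every jurisdiction examined the disparity was at least fourfold.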
data EMA starts reviewing Gilead's remdesivir data to accelerate approval of COVID-19 antiviral By www.fiercebiotech.com Published On :: Fri, 01 May 2020 08:07:12 +0000 The European Medicines Agency has begun a rolling review of data on Gilead’s remdesivir, positioning it to cut the time it takes to decide whether to approve the drug in COVID-19 patients. Full Article
data Janssen promotes R&D exec into newfound data science role By www.fiercebiotech.com Published On :: Wed, 06 May 2020 13:30:55 +0000 Following in the footsteps of an increasing number of biopharmas that want to use data to get more bang for their buck in R&D, J&J has promoted Najat Khan, Ph.D., to the role of chief data science officer. Full Article
data Chutes & Ladders—Johnson & Johnson elevates Khan to data science officer role By www.fiercebiotech.com Published On :: Thu, 07 May 2020 18:15:30 +0000 Johnson & Johnson taps Khan for chief data role; Icon poaches AstraZeneca vet Buck as CMO; Intellia signs on Lebwohl as CMO. Full Article
data Senior care homes source of nearly half of all California coronavirus deaths, data show By www.latimes.com Published On :: Fri, 8 May 2020 22:36:46 -0400 New data analyzed by the Los Angeles Times show that nearly half of all COVID-19 deaths in the state are associated with elder care facilities. Full Article
data Report says cellphone data suggests October shutdown at Wuhan lab, but experts are skeptical By www.nbcnews.com Published On :: Sat, 09 May 2020 00:12:00 GMT U.S. and U.K. intel agencies are reviewing the private report, but intel analysts examined and couldn't confirm a similar theory previously. Full Article
data Yokogawa Releases SensPlus Note, an OpreX Operation and Maintenance Improvement Solution for the Digitization of Field Data Using Mobile Devices By www.yokogawa.com Published On :: 2019-11-26T16:00:00+09:00 Yokogawa Electric Corporation (TOKYO: 6841) and MetaMoJi Corporation announce that they have jointly developed SensPlus Note, a low cost and easy to implement solution for the digitization of plant data on mobile devices. SensPlus Note, a solution in Yokogawa's OpreX Operation and Maintenance Improvement family, improves the efficiency and quality of maintenance work and the precision of post-maintenance analyses by enabling data from plant field work to be used more efficiently. This solution will be released in all markets worldwide on January 31. Full Article
data Yokogawa Releases AI-enabled Versions of SMARTDAC+ Paperless Recorders and Data Logging Software, and Environmentally Robust AI-enabled e-RT3 Plus Edge Computing Platform for Industry Applications By www.yokogawa.com Published On :: 2020-04-07T16:00:00+09:00 Yokogawa Electric Corporation (TOKYO: 6841) announces the release of artificial intelligence (AI)-enabled versions of the GX series panel-mount type paperless recorders, GP series portable paperless recorders, and GA10 data logging software, which are components of the highly operable and expandable SMARTDAC+ data acquisition and control system. This new AI functionality includes the future pen, a function developed by Yokogawa that enables the drawing of predicted waveforms. Yokogawa is also releasing a new CPU module for the e-RT3 Plus edge computing platform that is environmentally robust and Python compatible. The GX/GP and e-RT3 release is set for April 8, and the GA10 software will be released on May 13. The SMARTDAC+ system is a product in the OpreX Data Acquisition family, and the e-RT3 Plus is part of the OpreX Control Devices family. Full Article
data Estimating narrow-sense heritability using family data from admixed populations By feeds.nature.com Published On :: 2020-04-09 Full Article
data Cost-effectiveness of CYP2C19-guided antiplatelet therapy in patients with acute coronary syndrome and percutaneous coronary intervention informed by real-world data By feeds.nature.com Published On :: 2020-02-11 Full Article
data Combining clinical and candidate gene data into a risk score for azathioprine-associated leukopenia in routine clinical practice By feeds.nature.com Published On :: 2020-02-14 Full Article
data Functional neural correlates of psychopathy: a meta-analysis of MRI data By feeds.nature.com Published On :: 2020-05-06 Full Article
data Interpretation of omics data analyses By feeds.nature.com Published On :: 2020-05-08 Full Article
data Subduction megathrust heterogeneity characterized from 3D seismic data By feeds.nature.com Published On :: 2020-04-20 Full Article
data The National Microbiome Data Collaborative: enabling microbiome science By feeds.nature.com Published On :: 2020-04-29 Full Article
data Building PhoneGap applications powered by Database.com By www.adobe.com Published On :: Wed Jun 13 22:57:00 UTC 2012 Learn how to create mobile apps built using PhoneGap, with data served and persisted using Database.com. Full Article
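The tutorial above is PhoneGap/JavaScript, but the Database.com side is a Salesforce-style REST API, so the shape of a record insert can be sketched language-agnostically. A minimal Python illustration (the instance URL, token, API version, and helper name are all hypothetical; the request is built but not sent):

```python
import json
from urllib.request import Request

def build_insert_request(instance_url: str, sobject: str,
                         record: dict, access_token: str) -> Request:
    """Build (but do not send) a Salesforce-style REST insert request."""
    url = f"{instance_url}/services/data/v23.0/sobjects/{sobject}/"  # version illustrative
    return Request(
        url,
        data=json.dumps(record).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_insert_request("https://example.database.com", "Contact",
                           {"LastName": "Doe"}, "TOKEN")
print(req.full_url)  # https://example.database.com/services/data/v23.0/sobjects/Contact/
```

In the tutorial's actual setting, the equivalent call would be made from the PhoneGap app's JavaScript (e.g., via XMLHttpRequest) after an OAuth login, with the response's record ID used to persist and later retrieve the data.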
data From Digital Diplomacy to Data Diplomacy By www.belfercenter.org Published On :: Jan 14, 2020 Jan 14, 2020The digital revolution arrived late at the heart of ministries of foreign affairs across the Western world. Ministries latched on to social media around the time of Tahrir Square and Iran’s 2009 Green Revolution, beguiled by a vision of the technology engendering a networked evolution toward more liberal societies. Full Article
data Bridging Transatlantic Differences on Data and Privacy After Snowden By webfeeds.brookings.edu Published On :: Tue, 20 May 2014 07:30:00 -0400 “Missed connections” is the personals ads category for people whose encounters are too fleeting to form any union – a lost-and-found for relationships. I gave that title to my paper on the conversation between the United States and Europe on data, privacy, and surveillance because it provides an apt metaphor for the hopes and frustrations on both sides of that conversation. The United States and Europe are linked by common values and overlapping heritage, an enduring security alliance, and the world’s largest trading relationship. Europe has become the largest crossroad of the Internet, and the transatlantic backbone is the global Internet’s highest-capacity route. [I] But differences in approaches to the regulation of the privacy of personal information threaten to disrupt the vast flow of information between Europe and the U.S. These differences have been exacerbated by the Edward Snowden disclosures, especially stories about the PRISM program and eavesdropping on Chancellor Angela Merkel’s cell phone. The reaction has been profound enough to give momentum to calls for suspension of the “Safe Harbor” agreement that facilitates transfers of data between the U.S. and Europe, and has led Chancellor Merkel, the European Parliament, and other EU leaders to call for some form of European Internet that would keep data on European citizens inside EU borders. So it can seem like the U.S. and EU are gazing at each other from trains headed in opposite directions. My paper went to press before last week’s European Court of Justice ruling that Google must block search results showing that a Spanish citizen had property attached for debt several years ago. 
What is most startling about the decision is that this information was accurate and had been published in a Spanish newspaper by government mandate; for those reasons, the newspaper was not obligated to remove the information from its website. Nevertheless, Google could be required to remove links to that website from search results in Spain. That is quite different from the way the right to privacy has been applied in America. The decision’s discussion of search as “profiling” bears out what the paper says about European attitudes toward Google and U.S. Internet companies. So the decision heightens the differences between the U.S. and Europe. Nonetheless, the outlook does not have to be so bleak. In my paper, I look at the issues that have divided the United States and Europe when it comes to data and the things they have in common, the issues currently in play, and some ways the United States can help to steer the conversation in the right direction. [I] "Europe Emerges as Global Internet Hub," Telegeography, September 18, 2013. Authors Cameron F. Kerry Image Source: © Yves Herman / Reuters Full Article
data Missed Connections: Talking With Europe About Data, Privacy, and Surveillance By webfeeds.brookings.edu Published On :: Tue, 20 May 2014 11:57:00 -0400 The United States exports digital goods worth hundreds of billions of dollars across the Atlantic each year, and both Silicon Valley and Hollywood do big business with Europe. Differences in approaches to privacy have always made this relationship unsteady, but the Snowden disclosures greatly complicated the prospects of a Transatlantic Trade and Investment Partnership. In this paper Cameron Kerry examines the politics of transatlantic trade and the critical role that U.S. privacy policy plays in these conversations. Kerry relies on his experience as the U.S.’s chief international negotiator for privacy and data regulation to provide an overview of key proposals related to privacy and data in Europe. He addresses the possible development of a European Internet and the current regulatory regime known as Safe Harbor. Kerry argues that America and Europe have different approaches to protecting privacy, both of which have strengths and weaknesses. To promote transatlantic trade, the United States should: not be defensive about its protection of privacy; provide clear information to the worldwide community about American law enforcement surveillance; strengthen its own privacy protection; and focus on the importance of trade to the American and European economies. Downloads Download the paper Authors Cameron F. Kerry Image Source: © Francois Lenoir / Reuters Full Article
data New polling data show Trump faltering in key swing states—here’s why By webfeeds.brookings.edu Published On :: Fri, 08 May 2020 17:25:27 +0000 While the country’s attention has been riveted on the COVID-19 pandemic, the general election contest is quietly taking shape, and the news for President Trump is mostly bad. After moving modestly upward in March, approval of his handling of the pandemic has fallen back to where it was when the crisis began, as has his… Full Article
data USAID's public-private partnerships: A data picture and review of business engagement By webfeeds.brookings.edu Published On :: Mon, 29 Feb 2016 11:49:00 -0500 In the past decade, a remarkable shift has occurred in the development landscape. Specifically, acknowledgment of the central role of the private sector in contributing to, even driving, economic growth and global development has grown rapidly. The data on financial flows are dramatic, indicating reversal of the relative roles of official development assistance and private financial flows. This shift is also reflected in the way development is framed and discussed, never more starkly than in the Addis Ababa Action Agenda and the new set of Sustainable Development Goals (SDGs). The Millennium Development Goals (MDGs), which the SDGs follow, focused on official development assistance. In contrast, while the new set of global goals does not ignore the role of official development assistance, they reorient attention to the role of the business sector (and mobilizing host country resources). The U.S. Agency for International Development (USAID) has been in the vanguard of donors in recognizing the important role of the private sector to development, most notably via the agency’s launch in 2001 of a program targeted on public-private partnerships (PPPs) and the estimated 1,600 USAID PPPs initiated since then. This paper provides a quantitative and qualitative presentation of USAID’s public-private partnerships and business sector participation in those PPPs. The analysis offered here is based on USAID’s PPP data set covering 2001-2014 and interviews with executives of 17 U.S. corporations that have engaged in PPPs with USAID. The genesis of this paper is the considerable discussion by USAID and the international development community about USAID’s PPPs, but the dearth of information on what these partnerships entail. 
USAID’s 2014 release (updated in 2015) of a data set describing nearly 1,500 USAID PPPs since 2001 offers an opportunity to analyze the nature of those PPPs. On a conceptual level, public-private partnerships are a win-win, even a win-win-win, as they often involve three types of organizations: a public agency, a for-profit business, and a nonprofit entity. PPPs use public resources to leverage private resources and expertise to advance a public purpose. In turn, non-public sectors—both businesses and nongovernmental organizations (NGOs)—use their funds and expertise to leverage government resources, clout, and experience to advance their own objectives, consistent with a PPP’s overall public purpose. The data from the USAID data set confirm this conceptual mutual reinforcement of public and private goals. The arguments regarding “why” PPPs are an important instrument of development are well established. This paper presents data on the “what”: what kinds of PPPs have been implemented and in what countries, sectors, and income contexts. There are other research and publications on the “how” of partnership construction and implementation. What remains missing are hard data and analysis, beyond the anecdotal, as to whether PPPs make a difference—in short, is the trouble of forming these sometimes complex alliances worth the impact that results from them? The goal of this paper is not to provide commentary on impact since those data are not currently available on a broad scale. Similarly, this paper does not recommend replicable models or case studies (which can be found elsewhere), though these are important and can help new entrants to join and grow the field. 
Rather, the goal is to utilize USAID’s recently released data set to draw conclusions on the nature of PPPs, the level of business sector engagement, and, utilizing interviews, to describe corporate perspectives on partnership with USAID. The decision to target this research on business sector partners’ engagement in PPPs—rather than on the civil society, foundation, or public partners—is based on several factors. First, USAID’s references to its PPPs tend to focus on the business sector partners, sometimes to the exclusion of other types of partners; we want to understand the role of the partners that USAID identifies as so important to PPP composition. Second, in recent years much has been written and discussed about corporate shared value, and we want to assess the extent to which shared value plays a role in USAID’s PPPs in practice. The paper is divided into five sections. Section I is a consolidation of the principal data and findings of the research. Section II provides an in-depth “data picture” of USAID PPPs drawn from quantitative analysis of the USAID PPP data set and is primarily descriptive of PPPs to date. Section III moves beyond description and provides analysis of PPPs and business sector alignment. It contains the results of coding certain relevant fields in the data set to mine for information on the presence of business partners, commercial interests (i.e., shared value), and business sector partner expertise in PPPs. Section IV summarizes findings from a series of interviews of corporate executives on partnering with USAID. Section V presents recommendations for USAID’s partnership-making. Downloads WP94PPPReport2016Web Authors George Ingram, Anne E. Johnson, Helen Moser Full Article
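The field-coding exercise described in Section III can be illustrated with a small sketch. This is a hypothetical reconstruction, not USAID's actual analysis: the records, column names, and keyword markers below are invented for illustration, and real coding of the data set would need a far more careful classification of partner types.

```python
from collections import Counter

# Toy records standing in for rows of USAID's public PPP data set;
# the field names here are illustrative, not the data set's actual headers.
ppps = [
    {"country": "Kenya", "sector": "Health", "partners": "Acme Corp; Global NGO"},
    {"country": "Kenya", "sector": "Agriculture", "partners": "AgriBiz Ltd"},
    {"country": "Peru", "sector": "Health", "partners": "Local Foundation"},
]

# Crude keyword markers for coding a free-text partners field for
# business-sector presence (a real analysis would use a curated partner list).
BUSINESS_MARKERS = ("Corp", "Ltd", "Inc", "LLC")

def has_business_partner(record):
    """Code one PPP record for the presence of a business-sector partner."""
    return any(marker in record["partners"] for marker in BUSINESS_MARKERS)

# Descriptive tallies of the kind Section II reports: PPPs by sector,
# and the share of PPPs with at least one coded business partner.
by_sector = Counter(r["sector"] for r in ppps)
business_share = sum(has_business_partner(r) for r in ppps) / len(ppps)
```

The same pattern (tally by a categorical field, then code a free-text field against a marker list) extends to the shared-value and partner-expertise questions the paper raises.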
data Five years after Busan—how does the U.S. stack up on data transparency? By webfeeds.brookings.edu Published On :: Wed, 13 Apr 2016 09:00:00 -0400 Publish What You Fund’s 2016 Aid Transparency Index is out. And as a result, today we can assess whether major donors met the commitments they made five years ago at Busan to make aid transparent by the end of 2015. The index is also a window into the state of foreign aid transparency and how the U.S.—the world’s largest bilateral donor—stacks up. The global picture On the positive side, the index found that ten donors of varied types and sizes, accounting for 25 percent of total aid, have met the commitment to aid transparency. And more than half of the 46 organizations included in the 2016 index now publish data to the International Aid Transparency Initiative (IATI) registry at least quarterly. At the same time, the index’s assessments show more than half of the organizations still fall into the lowest three categories, scoring below 60 percent in terms of the transparency of their information. The U.S. picture Continuing its leadership on transparency, the Millennium Challenge Corporation comes in second overall in the index, meeting its Busan commitment and once again demonstrating that the institutional commitment to publishing and using its data continues. Otherwise, at first glance, U.S. progress seems disappointing. The five other U.S. donors included in the 2016 index are all in the “fair” category. Seen through a five-year lens, however, these same five U.S. donors were either in the “poor” or “very poor” categories in the 2011 index. So, all agencies have moved up, and three of them—U.S. Agency for International Development (USAID), Department of the Treasury, and the U.S. President's Emergency Plan for AIDS Relief—are on the cusp of “good.” In the two biggest U.S. 
agencies that administer foreign assistance, USAID and the State Department, the commitment is being institutionalized and implemented through more systematic efforts to revamp their outdated information systems. Both have reviewed the gaps in their data reporting systems and developed a path forward. USAID’s Cost Management Plan identifies specific steps to be taken and is well under way. The State Department Foreign Assistance Data Review (FADR) involves further reviews that need to be executed promptly in order to lead to action. Both are signs of a heightened commitment to data transparency, and both require continued agency leadership and staff implementation. The Department of Defense, which slid backwards in the last three assessments (and began in the "very poor" category in 2011), has for the first time moved into the "fair" category. It is still the lowest-performing U.S. agency in the index, but it is now publishing 12 new IATI fields. It is moving in the right direction, but significant work remains to be done. The third U.S. National Action Plan (NAP) announced last fall—the strongest issued by the U.S. to date—calls for improvements to the quality and comprehensiveness of U.S. data and commits the U.S. to doing more to raise awareness, accessibility, and demand for foreign assistance data. This gives all U.S. agencies the imperative to do much more to make their aid information transparent and usable. Going forward—what should the U.S. be focusing on? The overall challenge has been laid out in the third NAP: Almost all of the U.S. agencies need to improve the breadth and depth of the information they are publishing to meet IATI standards. Far too often, basic information—such as titles—is either not published or not useful. The Millennium Challenge Corporation should continue its leadership role, especially on data use. All agencies should be promoting the use of data among their own staff and by external stakeholders, especially at the country level. 
Feedback will go a long way toward helping them improve the quality of the data they are publishing and thereby help them meet the IATI standards. USAID must finish the work on its Cost Management Plan, including putting IATI in the planned Development Information Solution. Additionally, more progress needs to be made on the follow-up to the Aid Transparency Country Pilot Assessment to meet the needs of partners. The State Department needs to follow through on including IATI in the new integrated solution mapped out in its data review. The leadership of all foreign affairs agencies needs to work harder to make the business case for compiling, publishing, and using data on foreign aid programs. Open data, particularly when it is comparable, timely, accessible, and comprehensive, is an extremely valuable management asset. Agency leadership should be its champion. So far, we have not seen enough. U.S. progress on aid transparency was slow to start. It is still not where it needs to be. But with a modest but concerted push, three additional agencies will be in the “good” category and that is a story we can start to be proud of. We look forward to continued progress and to the day when all U.S. foreign aid meets transparency standards—a day I believe will be an important one for the cause of greater development, better governance, democratic participation, and reduced poverty worldwide. Authors George Ingram Full Article
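The "titles not published or not useful" problem called out above can be made concrete with a small sketch of an activity record in an IATI-like XML shape and a check for missing basics. The element names below follow the general pattern of the IATI activity standard, but this is illustrative only; consult the IATI schema for the authoritative structure, namespaces, and attributes.

```python
import xml.etree.ElementTree as ET

# Basic fields a publisher-side quality check might insist on
# (illustrative subset, not the standard's full required-field list).
REQUIRED_FIELDS = ("iati-identifier", "title")

def build_activity(identifier, title_text):
    """Assemble a minimal IATI-style activity element."""
    activity = ET.Element("iati-activity")
    ET.SubElement(activity, "iati-identifier").text = identifier
    title = ET.SubElement(activity, "title")
    ET.SubElement(title, "narrative").text = title_text
    return activity

def missing_fields(activity):
    """Flag absent or empty basics before publishing."""
    missing = []
    for tag in REQUIRED_FIELDS:
        node = activity.find(tag)
        if node is None or not "".join(node.itertext()).strip():
            missing.append(tag)
    return missing

good = build_activity("US-GOV-1-00001", "Maternal health program, Senegal")
bad = build_activity("US-GOV-1-00002", "")  # the "title not useful" case
```

A check like `missing_fields` run before each quarterly publication is one way an agency could catch the empty-title problem the index keeps flagging.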
data On December 10, 2019, Tanvi Madan discussed the policy implications of the Silk Road Diplomacy with AIDDATA in New Delhi, India. By webfeeds.brookings.edu Published On :: Tue, 10 Dec 2019 20:37:05 +0000 On December 10, 2019, Tanvi Madan discussed the policy implications of the Silk Road Diplomacy with AIDDATA in New Delhi, India. Full Article
data Development Seminar | Unemployment and domestic violence — New evidence from administrative data By webfeeds.brookings.edu Published On :: Wed, 12 Feb 2020 13:09:07 +0000 We hosted a Development Seminar on “Unemployment and domestic violence — new evidence from administrative data” with Dr. Sonia Bhalotra, Professor of Economics at University of Essex. Abstract: This paper provides possibly the first causal estimates of how individual job loss among men influences the risk of intimate partner violence (IPV), distinguishing threats from assaults. The authors find… Full Article
data The value of systemwide, high-quality data in early childhood education By webfeeds.brookings.edu Published On :: Thu, 20 Feb 2020 17:38:04 +0000 High-quality early learning experiences—those filled with stimulating and supportive interactions between children and caregivers—can have long-lasting impacts for children, families, and society. Unfortunately, many families, particularly low-income families, struggle to find any affordable early childhood education (ECE) program, much less programs that offer engaging learning opportunities that are likely to foster long-term benefits. This post… Full Article
data Class Notes: Elite college admissions, data on SNAP, and more By webfeeds.brookings.edu Published On :: Wed, 27 Nov 2019 14:48:42 +0000 This week in Class Notes: Harvard encourages applications from many students who have very little chance of being admitted, particularly African Americans. Wages for low-skilled men have not been influenced by changes in the occupational composition of workers. Retention rates for the social insurance program SNAP (Supplemental Nutrition Assistance Program) are low, even among those who remain eligible.… Full Article
data Investigations into using data to improve learning By webfeeds.brookings.edu Published On :: Mon, 13 Feb 2017 22:15:57 +0000 In 2010, the Australian Commonwealth Government, in partnership with the Australian states and territories, created an online tool called My School. The objective of My School was to enable the collation and publication of data about the nearly 10,000 schools across the country. Effectively offering a report card for each Australian school,[1] My School was… Full Article
data Lessons in using data to improve education: An Australian example By webfeeds.brookings.edu Published On :: Mon, 13 Feb 2017 22:32:40 +0000 When it comes to data, there is a tendency to assume that more is always better; but the reality is rarely this simple. Data policies need to consider questions around design, implementation, and use. To offer an illustrative example, in 2010 the Australian Federal government launched the online tool My School to collect and publish… Full Article
data Big Data and Sustainable Development: Evidence from the Dakar Metropolitan Area in Senegal By webfeeds.brookings.edu Published On :: Thu, 23 Apr 2015 11:43:00 -0400 There is a lot of hope around the potential of Big Data—massive volumes of data (such as cell phone GPS signals, social media posts, online digital pictures and videos, and transaction records of online purchases) that are large and difficult to process with traditional database and software techniques—to help achieve the sustainable development goals. The United Nations even calls for using the ongoing Data Revolution—the explosion in quantity and diversity of Big Data—to make more and better data usable to inform development analysis, monitoring, and policymaking. In fact, the United Nations believes that “Data are the lifeblood of decision-making and the raw material for accountability. Without high-quality data providing the right information on the right things at the right time; designing, monitoring and evaluating effective policies becomes almost impossible.” The U.N. even held a “Data Innovation for Policy Makers” conference in Jakarta, Indonesia in November 2014 to promote the use of Big Data in solving development challenges. Big Data has already played a role in development: early uses include the detection of influenza epidemics using search engine query data and the estimation of a country’s GDP using satellite data on night lights. Work is also under way by the World Bank to use Big Data for transport planning in Brazil. During the Data for Development session at the recent NetMob conference at MIT, we presented a paper in which we jump on the Big Data bandwagon. In the paper, we use mobile phone data to assess how the opening of a new toll highway in Dakar, Senegal is changing how people commute to work (human mobility) in this metropolitan area. 
The new toll road is one of the largest investments by the government of Senegal, and expectations for its developmental impact are high. In particular, the new infrastructure is expected to increase the flow of goods and people into and out of Dakar, spur urban and rural development outside congested areas, and boost land valuation outside Dakar. Our study is a first step in helping policymakers and other stakeholders benchmark the impact of the toll road against many of these objectives. Assessing how the impact of the new toll highway differs by area and how it changes over time can help policymakers benchmark the performance of their investment and better plan the development of urban areas. The Dakar Diamniadio Toll Highway The Dakar Diamniadio Toll Highway (in red in Figure 1), inaugurated on August 1, 2013, is the first section (32 km or 20 miles) of a broader project to connect the capital, Dakar, through a double three-lane highway to a new airport (Aeroport International Blaise Diagne, AIBD), a special economic zone (the Dakar Integrated Special Economic Zone, DISEZ), and the rest of the country. Note: The numbers indicate the incidence of increased inter-cell mobility and were used to calculate the percentage increase in mobility. The cost of this large project is estimated to be about $696 million (FCFA 380.2 billion, or 22.7 percent of 2014 fiscal revenues excluding grants), with the government of Senegal having already disbursed $353 million. The project is one of the first toll roads in sub-Saharan Africa (excluding South Africa) structured as a public-private partnership (PPP) and includes multilateral partners such as the World Bank, the French Development Agency, and the African Development Bank. In our study, we ask whether the new toll road led to an increase in human mobility and, if so, whether particular geographical areas experienced higher or lower mobility relative to others following its opening. Did the Highway Increase Human Mobility? 
Using mobile phone usage data (Big Data), we use statistical analysis in our paper to approximate where people live and where they work. We then estimate how the reduction in travel time following the opening of the toll road changes the way they commute to work. As illustrated in the map of Figure 1, we find some interesting trends: Human mobility in the metropolitan Dakar area increased on average by 1.34 percent after the opening of the Dakar Diamniadio Toll Highway. However, this increase masks important disparities across the different sub-areas of the Dakar metropolitan area. Areas in blue in Figure 1 are those for which mobility increased after the opening of the new toll road, while those in red experienced decreased mobility. In particular, the Parcelles Assainies suburban area benefited the most from the toll road, with an increase in mobility of 26 percent, while the Centre Ville (downtown) area experienced a decrease in mobility of about 20 percent. These trends are important and would have been difficult to discover without Big Data. Now, though, researchers need to parse through the various reasons these trends might have occurred. For instance, the Parcelles Assainies area may have benefited the most because of its closer location to the toll road, whereas the feeder roads in the downtown area may not have been able to absorb the increase in traffic from the toll road. Or people may have moved from the downtown area to less expensive areas in the suburbs now that the new toll road makes commuting faster. The Success of Big Data From these preliminary results (our study is work in progress, and we will be improving its methodology), we are encouraged by the fact that our method and use of Big Data have three areas of application for a project such as this: Benchmarking: Our method can be used to track how the impact of the Dakar Diamniadio Toll Highway changes over time and for different areas of the Dakar metropolitan area. This process could be used to study other highways in the future and inform highway development overall. Zooming in: Our analysis is a first step towards a more granular study of the different geographic areas within the Dakar metropolitan area, and it could perhaps inspire similar studies around the continent. In particular, it would be useful to study the socio-economic context within each area to better appreciate the impact of new infrastructure on people’s lives. For instance, in order to move from estimates of human mobility (traffic) to measures of “accessibility,” it will be useful to complement the current analysis with an analysis of land use, a study of job accessibility, and other labor market information for specific areas. Regarding accessibility, questions of interest include: Who lives in the areas most/least affected? What kind of jobs do they have access to? What type of infrastructure do they have access to? What is their income level? Answers to these questions can be obtained using satellite information for land prices, survey data (including through mobile phones), and data available from the authorities. Regarding urban planning, questions include: Is the toll road diverting traffic to other areas? What happens in those areas? Do they have the appropriate infrastructure to absorb the increase in traffic? Zooming out: So far, our analysis is focused on the Dakar metropolitan area, and it would be useful to assess the impact of new infrastructure on mobility between the rest of the country and Dakar. For instance, the analysis can help assess whether the benefits of the toll road spill over to the rest of the country and even differentiate the impact of the toll road on the different regions of the country. This experience tells us that there are major opportunities in converting Big Data into actionable information, but the impact of Big Data still remains limited. 
In our case, the use of mobile phone data helped generate timely and relatively inexpensive information on the impact of a large transport infrastructure on human mobility. On the other hand, it is clear that more analysis using socioeconomic data is needed to get to concrete and impactful policy actions. Thus, we think that making such information available to all stakeholders has the potential not only to guide policy action but also to spur it. Authors Thiemo Fetzer, Amadou Sy Image Source: © Normand Blouin / Reuters Full Article
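The home/work inference at the heart of the Dakar study can be illustrated with a rough sketch. The records, cell names, and time windows below are invented assumptions for illustration: the actual study's statistical analysis of the D4D call records is more involved than a simple modal-cell rule.

```python
from collections import Counter
from datetime import datetime

# Hypothetical CDR records: (user_id, cell_id, timestamp).
cdrs = [
    ("u1", "cell_A", datetime(2013, 7, 1, 21, 0)),   # evening call
    ("u1", "cell_A", datetime(2013, 7, 2, 22, 30)),  # evening call
    ("u1", "cell_B", datetime(2013, 7, 2, 11, 0)),   # working-hours call
    ("u1", "cell_B", datetime(2013, 7, 3, 10, 15)),  # working-hours call
]

def modal_cell(records, hour_pred):
    """Most frequent cell among records whose hour satisfies hour_pred."""
    counts = Counter(cell for _, cell, ts in records if hour_pred(ts.hour))
    return counts.most_common(1)[0][0] if counts else None

# A common heuristic: night-time activity proxies the home cell,
# working-hours activity proxies the work cell.
home = modal_cell(cdrs, lambda h: h >= 19 or h < 7)
work = modal_cell(cdrs, lambda h: 9 <= h <= 17)

def pct_change(trips_before, trips_after):
    """Percentage change in trip counts between two periods."""
    return 100.0 * (trips_after - trips_before) / trips_before
```

Aggregating `pct_change` over home-to-work cell pairs before and after the August 1, 2013 opening is the kind of comparison behind the reported 1.34 percent average increase.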
data Big Data for improved diagnosis of poverty: A case study of Senegal By webfeeds.brookings.edu Published On :: Tue, 02 Jun 2015 15:07:00 -0400 It is estimated that there are 95 mobile phone subscriptions per 100 inhabitants worldwide, and this boom has not been lost on the developing world, where the number of mobile users has also grown at rocket speed. In fact, in recent years the information communication technology (ICT) revolution has provided opportunities leading to the “death of distance,” allowing many obstacles to better livelihoods, especially for those in remote regions, to disappear. Remarkably, though, the huge proportion of poverty-stricken populations in so many of those same regions persists. How might we then think differently about the relationship between these two ideas? Can ICTs act as an engine for eradicating poverty and improving the quality of life in terms of better livelihoods, strong education outcomes, and quality health, and if so, how? Do today's communication technologies hold such potential? In particular, the mobile phone’s accessibility and use provides us with an unprecedented volume of data on social interactions, mobility, and more. So, we ask: Can this data help us better understand, characterize, and alleviate poverty? Mapping call data records, mobility, and economic activity The first step towards alleviating poverty is to generate poverty maps. Currently, poverty maps are created using nationally representative household surveys, which require manpower and time. Such maps are generated at a coarse regional resolution and continue to lag for countries in sub-Saharan Africa compared to the rest of the world. As call data records (CDRs) allow a view of the communication and mobility patterns of people at an unprecedented scale, we show how this data can be used to create much more detailed poverty maps efficiently and at a finer spatial resolution. 
Such maps will facilitate improved diagnosis of poverty and will assist public policy planners in initiating appropriate interventions, specifically at the decentralized level, to eradicate human poverty and ensure a higher quality of life. How can we get such high-resolution poverty maps from CDR data? In order to create these detailed poverty maps, we first define the virtual network of a country as a “who-calls-whom” network. This signifies the macro-level view of connections or social ties between people, dissemination of information or knowledge, or dispersal of services. As calls are placed for a variety of reasons, including requests for resources, information dissemination, and personal matters, CDRs provide an interesting way to construct a virtual network for Senegal. We start by quantifying the accessibility of mobile connectivity in Senegal, both spatially and across the population, using the CDR data. This quantification measures the amount of communication across various regions in Senegal. The result is a virtual network for Senegal, which is depicted in Figure 1. The circles in the map correspond to regional capitals, and the edges correspond to the volume of mobile communication between them. Thicker edges mean higher volumes of communication; bigger circles mean heavier incoming and outgoing communication for that region. Figure 1: Virtual network for Senegal with MPI as an overlay Source: Authors’ rendering of the virtual network of Senegal based on the dataset of CDRs provided as part of the D4D Senegal Challenge 2015 Figure 1 also shows the regional poverty index[1] as an overlay. A high poverty index corresponds to very poor regions, which are shown in lighter green on the map. It is evident that regions with plenty of strong edges have lower poverty, while most poor regions appear isolated. Now, how can we give a more detailed look at the distribution of poverty? 
Using the virtual network, we extract quantitative metrics indicating the centrality of each region in Senegal, and we calculate centrality measures for all the arrondissements[2] within each region. We then correlate the regional centrality measures with the poverty index to build a regression model. Using the regression model, we predict the poverty index for each arrondissement. Figure 2 shows the poverty map generated by our model for Senegal at an arrondissement level. This finer disaggregation of poverty makes it possible to identify the pockets of arrondissements most in need of sustained growth. The poorer arrondissements are shown in lighter green, with high values for the poverty index. Figure 2: Predicted poverty map at the arrondissement level for Senegal with MPI as an overlay Source: Author’s rendering of the virtual network of Senegal based on the dataset of CDRs provided as a part of D4D Senegal Challenge 2015. What is next for call data records and other Big Data in relation to eradicating poverty and improving human development? This investigation is only the beginning. Since poverty is a complex phenomenon, poverty maps showcasing multiple perspectives, such as ours, provide policymakers with better insights for effective responses for poverty eradication. As noted above, these maps can be used for decomposing information on deprivation of health, education, and living standards—the main indicators of the human development index. Even more particularly, we believe that this Big Data and our models can generate disaggregated poverty maps for Senegal based on gender, the urban/rural gap, or ethnic/social divisions. Such poverty maps will assist in policy planning for inclusive and sustained growth of all sections of society. Our methodology is generic and can be used to study other socio-economic indicators of society. Like many uses of Big Data, our model is in its nascent stages.
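The centrality-to-poverty regression step can be sketched as follows. The centrality scores, MPI values, and place names here are illustrative assumptions, not the actual Senegal estimates, and the real model uses several centrality measures rather than the single predictor shown:

```python
# Fit MPI ~ centrality at the region level, then predict MPI for
# arrondissements from their own centrality scores.

def ols_fit(xs, ys):
    """Ordinary least squares for a single predictor: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Regional training data: (centrality, observed MPI). Higher centrality,
# i.e. more communication, tends to go with lower poverty.
regions = [(0.9, 0.15), (0.7, 0.25), (0.4, 0.45), (0.2, 0.60)]
slope, intercept = ols_fit([c for c, _ in regions], [m for _, m in regions])
print(round(slope, 3))  # -0.647: more central regions are predicted less poor

# Predict MPI for (hypothetical) arrondissements from centrality alone.
arrondissement_centrality = {"Almadies": 0.85, "Koumpentoum": 0.25}
predicted = {name: slope * c + intercept
             for name, c in arrondissement_centrality.items()}
print(round(predicted["Almadies"], 2), round(predicted["Koumpentoum"], 2))  # 0.17 0.56
```

The negative slope is what Figure 1 suggests visually: isolated regions with few strong edges carry higher poverty indices.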
Currently, we are working towards testing our methodology at the ground level in Senegal, so that it can be further updated based on the needs of the people and developmental interventions can be planned. The pilot project will help us replicate our methodology in other underdeveloped countries. In the forthcoming post-2015 development agenda intergovernmental negotiations, the United Nations would like to ensure the “measurability, achievability of the targets” along with identification of “technically rigorous indicators” for development. It is in this context that Big Data can be extremely helpful in tackling extreme poverty. Note: This examination was part of the "Data for Development Senegal" Challenge, which focused on how to use Big Data for grassroots development. We took part in the Data Challenge, which was held in conjunction with NetMob 2015 at MIT from April 7-10, 2015. Our team received the National Statistics prize for our project titled, "Virtual Network and Poverty Analysis in Senegal.” This blog reflects the views of the authors only and does not reflect the views of the Africa Growth Initiative. [1] As a measure of poverty, we have used the Multidimensional Poverty Index (MPI), which is a composite of 10 indicators across three areas: education (years of schooling, school enrollment), health (malnutrition, child mortality), and living conditions. [2] Senegal is divided into 14 administrative regions, which are further divided into 123 arrondissements. Authors Neeti Pokhriyal, Wen Dong, Venu Govindaraju Full Article
data Don’t let perfect be the enemy of good: To leverage the data revolution we must accept imperfection By webfeeds.brookings.edu Published On :: Thu, 14 Apr 2016 09:30:00 -0400 Last month, we experienced yet another breakthrough in the epic battle of man against machine. Google’s AlphaGo won against the reigning Go champion Lee Sedol. This success, however, was different from that of IBM’s Deep Blue against Garry Kasparov in 1997. While Deep Blue still applied “brute force” to calculate all possible options ahead, AlphaGo was learning as the game progressed. And it is through this computing breakthrough that we can learn how to better leverage the data revolution. In the game of Go, brute-force strategies don’t help because the total number of possible combinations exceeds the number of atoms in the universe. Some games, including some we have played since childhood, were immune to computing “firepower” for a long time. For example, Connect Four wasn’t solved until 1995, with the conclusion being that the first player can force a win. And checkers wasn’t solved until 2007, when Jonathan Schaeffer determined that in a perfect game, both sides could force a draw. For chess, a safe strategy has yet to be developed, meaning that we don’t know yet if white could force a win or, like in checkers, black could manage to hold on to a draw. But most real-life situations are more complicated than chess, precisely because the universe of options is unlimited and solving them requires learning. If computers are to help, beyond their use as glorified calculators, they need to be able to learn. This is the starting point of the artificial intelligence movement. In a world where perfection is impossible, you need well-informed intuition in order to advance. The first breakthrough in this space occurred when IBM’s Watson beat America’s Jeopardy! champions in 2011. These new intelligent machines operate in probabilities, not in certainty.
That being said, perfection remains important, especially when it comes to matters of life and death such as flying airplanes, constructing houses, or conducting heart surgery, as these areas require as much attention to detail as possible. At the same time, in many realms of life and policymaking we fall into a perfection trap. We often generate obsolete knowledge by attempting to explain things perfectly, when effective problem solving would have been better served by real-time estimates. We strive for exactitude when rough results, more often than not, are good enough. By contrast, some of today’s breakthroughs are based on approximation. Think of Google Translate and Google’s search engine itself. The results are typically quite bad, but compared to the alternative of not having them at all, or spending hours leafing through an encyclopedia, they are wonderful. Moreover, once these imperfect breakthroughs are available, one can improve them iteratively. Only once the first IBM and Apple PCs were put on the market in the 1980s did the cycle of upgrading start, which still continues today. In the realm of social and economic data, we have yet to reach this stage of “managed imperfection” and continuous upgrading. We are producing social and economic forecasts with solid 20th century methods. With extreme care we conduct poverty assessments and maps, usually taking at least a year to produce as they involve hundreds of enumerators, lengthy interviews and laborious data entry. Through these methods we are able to perfectly explain past events, but we fail to estimate current trends—even imperfectly. The paradox of today’s big data era is that most of that data is poor and messy, even though the possibilities for improving it are unlimited. Almost every report from development institutions starts with a disclaimer highlighting “severe data limitations.” This is because only 0.5 percent of all the available data is actually being curated to be made usable. 
If data is the oil of the 21st century, we need data refineries to convert the raw product into something that can be consumed by the average person. Thanks to the prevalence of mobile devices and rapid advances in satellite technology, it is possible to produce more data faster, better, and cheaper. High-frequency data also makes it possible to make big data personal, which in turn increases the likelihood that people act on it. Ultimately, the breakthroughs in big data for development will be driven by managerial cultures, as has been the case with other successful ventures. Risk-averse cultures pay great attention to perfection. They nurture the fear of mistakes and losing. Modern management accepts failure, encourages trial and error, and reaches progress through iteration and continuous upgrading. Authors Wolfgang Fengler Full Article
data Building a more data-literate city: A Q&A with HyeSook Chung By webfeeds.brookings.edu Published On :: Tue, 15 Dec 2015 11:00:00 -0500 DC KIDS COUNT, housed at the nonprofit DC Action for Children, is the DC chapter of a nationwide network of local-level organizations aiming to provide a community-by-community picture of the conditions of children. The 26-year-old project is funded by the Annie E. Casey Foundation, and its aim is to provide high-quality data and trend analysis as well as help local governments monitor budget and legislative decisions based on evidence of what works for children and families. As we pointed out in our recent papers and a blog, developing reliable and comprehensive data is a critical step to building effective community partnerships and producing outcomes that improve economic mobility and health in a neighborhood. We discussed these issues with HyeSook Chung, Executive Director of DC Action for Children. Q. Please summarize the history of the DC Kids Count project. What motivated it, and how has it evolved over the last few years? A. As part of the nationwide Kids Count network, each chapter tracks a number of indicators on child and family well-being through an online database called the Kids Count Data Center. Each chapter also releases a yearly data book which summarizes the state of child well-being within their state or locality. When DC Action for Children became the host of DC Kids Count in 2012, I wanted to rethink the way we presented our data to move beyond the traditional print format into the exciting realm of visualizing data. This led to the beginning of our partnership with DataKind, a group of dedicated pro-bono data scientists who worked with us to create an interactive, web-based data tool that maps out indicators of child well-being across DC’s 39 neighborhood clusters. We know that the neighborhood children grow up in, and the resources they have access to, play a huge role in shaping children’s future opportunities.
The maps we created with our Data Tool 2.0 reveal sharp disparities in DC neighborhoods: some DC neighborhoods are wealthy and have many assets, while others are characterized by high levels of poverty. The many challenges that come with high poverty neighborhoods include: poorer performing schools, more crime, and less access to libraries, parks, and healthy foods. Q. What type of indicators do you gather? How many years does the data cover? What level of granularity does the data have? A. We track a variety of indicators of child well-being, including demographics, economic well-being, health and safety. The data is housed online in two places: The KIDS COUNT Data Center and our Data Tool 2.0. The Data Tool 2.0 maps the most recent available data at the neighborhood cluster, while the Data Center allows for a wider range of geographies (citywide and ward level) and different timeframes. Many of the indicators have data from 1990 to the present. Q. How do you measure the data tool’s impact on policy and legislation? A. We have made it a priority to conduct internal evaluations to assess the utilization of the online tool, but we also believe that measuring the tool’s impact must go beyond traditional web analytics. We regularly use the Data Tool 2.0 in our work with city officials and direct service providers to offer an overview of the social context in the city’s different neighborhoods. In a city where the allocation of resources is often guided by personal relationships and old-school politics, it is important to show clearly whether budget decisions are aligned with the needs of our children. We believe that our Data Tool 2.0 project can bring much needed transparency to the allocation of the DC government budget and help achieve agreement. Q. The DC Kids Count project is helping build data capacity across organizations, with the aim of creating a more “data-literate” city. Could you tell us about some of these initiatives? A. 
Businesses like Amazon and Netflix increasingly focus on finding “actionable” insights from their data. For them, “big data” analytics can help answer tough business questions. With the right platforms for analytics, they can increase efficiency or even improve operations and sales. In a similar manner, we at DC Action for Children believe that big data opens up the opportunity for us to improve and reshape our strategy and decision making process to better align services with the needs of DC children in the same way Amazon or Netflix does with their customers. For instance, we are offering the Child and Family Services Agency technical and data analysis support for their Healthy Families Thriving Communities Collaboratives, which are a citywide network of community-based organizations designed to embed family supports in their communities. Their mission is to strengthen and stabilize families and to prevent child abuse and neglect by offering services in the form of case management and support. We use KIDS COUNT data at the ward and neighborhood levels to highlight needs in the community and inform their planning. This encourages the Collaboratives’ staff to look at data differently—integrating it as a vital part of their program planning and strategy. Q. What are some of the obstacles and challenges you face in integrating the data, and updating it? A. Historically, our data analysis looked at more traditional indicators, such as program enrollment and the number of child welfare cases. But now we think we can use our access to big data to pull out patterns within our datasets and help guide the decisions of the city administrators. For example, if we are trying to prevent future child abuse cases, we can look at patterns analyzing family and child data in specific neighborhoods. We can use the type of predictive analysis practiced in the for-profit business to help us serve DC children more efficiently and effectively. 
One of the most significant obstacles we face is ensuring that the indicators are up-to-date. This can be an issue with government agencies since some of them are slow in their release of new data. Moreover, there is also no standard format across local agencies for how data is collected and released. Furthermore, data is often aggregated at different geographical units, like zip codes or census tracts. To get the data ready to upload to our Data Tool, we must recalculate the data into neighborhood clusters. Q. What policy changes would help produce better data-sharing ecosystems? A. DC has in many ways demonstrated leadership in data sharing. The Office of the Chief Technology Officer works to make a large variety of datasets publicly available. We have also seen large investments over the years to create new data systems that track progress and service delivery for different agencies. But our city can do more to promote a data-sharing ecosystem. So can other cities. While multiple agencies are adopting innovative data systems, the systems are often siloed and do not speak to each other. Moreover, since data is tracked differently across agencies, based on needs and requirements for reporting, it is difficult for agencies to share data both publicly and internally. It is also often difficult to get access to de-identified disaggregated data for richer analysis. We are glad that many agencies recognize the value of robust data collection, but more data transparency policies would give us a better understanding of the challenges that lie behind improving the wellbeing of children in the city. Q. What are the next steps for the DC Kids Count project, and how do you expect it to grow over the next few years? A. We just finished wrapping up some of the final work on our DataTool 2.0. In terms of next steps, we are working on a handbook that explains how we created our Data Tool so that other Kids Count chapters and organizations can replicate and adapt our tool. 
We would also like to add local budget data to the asset maps to see if public investments align with the neighborhoods that need them the most. This would give us a more nuanced understanding of the geography of DC budget investments, including inequities in investments by geography and demographics. Big data analytics has changed the way we focus our priorities and engage in business practices. I’m committed to this movement. I think that, through big data, we can also revolutionize the way we do policy. *** In conclusion, DC Kids Count, housed at the nonprofit DC Action for Children, belongs to a larger, nationwide group of organizations helping to better coordinate regional development through data-driven decision making. By centralizing different government databases, and providing real-time, community-level data, DC Kids Count can help local government entities allocate their resources more efficiently and creatively and help foster place-conscious strategies. The process behind compiling the data also illustrates many of the challenges involved in building “data-sharing ecosystems”: data sharing, interoperability of data systems, and access to real-time data. Authors Stuart M. Butler, Jonathan Grabinsky Full Article
data More data can make college less risky By webfeeds.brookings.edu Published On :: Thu, 21 Jan 2016 05:00:00 -0500 There are lots of good reasons to go to college, but the vast majority of prospective students in this country report[i] that they’ll go to college because they believe that it will improve their employment opportunities and financial wellbeing. And for the most part, they’re right. Despite many suggestions to the contrary, it’s very well documented[ii] that investments in higher education pay large dividends in the form of future earnings. This makes higher education one of the most important tools we have for generating social mobility. Regardless of an individual’s starting point in life, higher education offers access to greater financial well-being. Unfortunately, it’s not a fail proof system. Investments in education, like investments in the stock market, do not come without risk. In financial markets, access to information is one way investors mitigate risk. Mutual funds, for example, disclose average returns over various time periods for certain categories of investments (e.g. large-cap funds, emerging market funds, technology funds, etc.), in addition to other information. These data, moreover, are widely and freely available through consumer-oriented websites like Yahoo Finance, Vanguard, and E-Trade. Yet, for higher education, students have had access to no analogous information until quite recently. For decades, economists discussed the average benefits of a college education compared to a high school education with no regard to either field of study or institution. Finally, in 2009, the Census Bureau started collecting data that could be used to assess which majors pay the most,[iii] and then just a few months ago, the Department of Education released data on the earnings of alumni by institution, for all students who receive federal grants or loans. 
These data can be further analyzed, as we have done, to estimate the economic contribution of schools (or value-added) as distinct from the outcomes attributable to student characteristics (like test scores).[iv] Still, even with these data advances, students cannot compare earnings by major across institutions, except in a handful of cases using state data systems. Here, we illustrate how data by major and institution can inform the decision of what to study and where, using data from Texas. Suppose first that a student is a Texas resident and has decided she would like to pursue a bachelor’s degree at a public institution in her state. Our data on alumni earnings by major come from the Texas Higher Education Coordinating Board, and we combine them with information on the net cost of tuition from the Department of Education’s IPEDS database as reported in the College Scorecard.[v] We use these data to estimate the ten-year return on investment for each institution in the state of Texas by major. We calculate an estimate of the ten-year return by summing the average earnings faced by graduates over the first ten years following graduation[vi] and subtracting off the wage they would have received as a high school graduate without a degree (taking into account the additional years of earnings when they would have been enrolled in college). To estimate this benchmark, we used data on Texas residents from the Annual Social and Economic Supplement to the Current Population Survey, obtained via IPUMS CPS.[vii] We then subtract the institution-specific costs[viii] to get the ten-year financial return. Since education pays off over a lifetime, this isn’t the ideal exercise, but it’s still informative. We’ve estimated these returns based on the population of individuals who both complete their degree and do not go on to complete graduate study. Ideally, these estimated expected returns would be adjusted to account for how earnings and costs are affected by non-completion.
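The return calculation described above can be sketched as a back-of-the-envelope computation. All dollar figures below are made up for illustration; the actual study uses Texas administrative earnings, CPS high-school benchmarks, and IPEDS net-tuition data, with CPI adjustment and imputation for missing years:

```python
def ten_year_return(grad_earnings, hs_wage, enroll_years, annual_cost):
    """Sum of graduate earnings over the first ten years after graduation,
    minus the high-school counterfactual (which also earns during the
    enrollment years), minus institution-specific enrollment costs."""
    grad_total = sum(grad_earnings)                           # 10 post-graduation years
    hs_total = hs_wage * (len(grad_earnings) + enroll_years)  # 14 years of HS wages
    cost_total = annual_cost * enroll_years
    return grad_total - hs_total - cost_total

# Hypothetical program: a flat $50k average wage for 10 years, against a
# $28k high-school benchmark and $9k per year of net tuition.
ret = ten_year_return([50_000] * 10, hs_wage=28_000, enroll_years=4, annual_cost=9_000)
print(ret)  # 500000 - 28000*14 - 36000 = 72000
```

A program whose graduates out-earn the benchmark only slightly can still show a negative ten-year return once the forgone enrollment-year wages and tuition are subtracted, which is how combinations like Animal Science at Sul Ross State end up below zero.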
Indeed, the average rate of completion across these schools is only 48 percent. This is a quick and dirty method for estimating returns that fails to take into account a number of selection issues,[ix] but we believe that it still provides an effective illustration of risk in higher education. Figure 1 illustrates the potential average outcome facing our Texas student, who is deciding between bachelor’s degree programs from the set of public institutions in her home state. We’ve plotted the distribution of financial returns for the set of potential expected outcomes, which are defined as all combinations of institution and major. To be clear, the distribution of potential outcomes would be far wider if we were using individual specific variation (i.e. the fact that some students will ultimately earn more than others, even with the same degree from the same institution) and the real possibility of non-completion. We know that, on average, this student will face a positive return on her investment, wherever she chooses to go. The average rate of return across all possible choices facing this student is quite a sizeable 11.3 percent (or $216,000 in undiscounted 2014 dollars). At a systemic level that’s important. Still, the standard deviation is 6.7, with a low return of a -6.6 percent (Animal Science at Sul Ross State) and a high return of 79.8 percent (Registered Nursing at UT Brownsville). Out of 1065 combinations of majors and schools, 19 yielded average negative returns. This was true even for two programs at the selective UT Austin campus (Visual and Performing Arts and Classics). 1.1 percent of students who graduated in 2004 were in a major-institution combination that yielded a net return below 4 percent. In such cases, they would have been better off putting their dollars into treasury bills. Figure 1. 
Mean return on bachelor’s degree investment by institution and major, for Texas residents who graduated in 2004 from a Texas public college Students who know what they want to major in could benefit greatly from knowing which school is likely to generate the largest payoff (it would be nice to know this in terms of learning as well as money, but that is another, more complicated matter). We’ve illustrated the distribution of potential outcomes for two different popular majors, Liberal Arts and Sciences and Electrical Engineering.[x] Both majors clearly offer a significant average rate of return across all institutions (12 percent for Liberal Arts and 20 percent for Electrical Engineering), but depending on which major they choose the student will face a different level of risk in their future earnings. The variation (standard deviation) in the expected rate of return across institutions is much larger for Liberal Arts majors (5.7) than for Electrical Engineering majors (3.7). Yet, while these facts may discourage people from pursuing a Liberal Arts major in the abstract, the plot below does show that some Liberal Arts majors out-earn their peers in electrical engineering. For example, Liberal Arts majors from UT Austin earned a higher return than Electrical Engineering majors at UT Dallas, the University of Houston, and three other UT campuses. Thus, these more detailed facts can actually encourage students to pursue majors that look economically bad for the average student but quite attractive at a particular school with a strong program. Figure 2. Distribution of earnings 10 years after graduation for bachelor’s degree holders with an Electrical Engineering or Liberal Arts degree, for Texas residents and 2004 graduates from Texas public colleges The point is that college degrees, like other investments, are risky, but information goes a long way to clarify the nature of that risk and improve the quality of investment decisions.
In addition to providing students and the public greater access to data on market performance of alumni, there are a number of innovations both in the policy arena and in the private market that could help make college investments less risky. First of all, innovative financing systems that allow students to pay for their investment over a longer period of time and tie repayment to earnings would greatly limit downside risk for students. Second, institutions have the capacity to shoulder some of this risk, and a proposal known as risk-sharing[xi] is gaining some traction and would require schools to pay the federal government some portion of loan default losses. On a voluntary basis, some colleges have offered on-time graduation guarantees[xii] and wage guarantees.[xiii] And last, new business models in higher education could help mitigate risk. Part of the problem in the current system comes from the all-or-nothing regime in which students have to invest in a bundle of coursework (i.e. a degree) in order to reap significant returns. The growing prominence of new models, like micro-credentials[xiv] and coding boot camps,[xv] can offer alternatives that don’t require students to put all of their eggs in one basket. [i] http://www.edcentral.org/collegedecisions/ [ii] http://www.brookings.edu/blogs/jobs/posts/2012/10/05-jobs-greenstone-looney [iii] https://www.census.gov/prod/2012pubs/acsbr11-10.pdf [iv] http://www.brookings.edu/research/reports2/2015/10/29-earnings-data-college-scorecard-rothwell [v] Alumni earnings are reported to us at the field of study and institutional level for all alumni who graduated from a Texas four-year public institution in 2004 and were working in Texas one year, three years, five years, 8 years, or ten years after graduation up until 2015. The sample is further restricted to bachelor’s degree only recipients who did not go on to earn a higher degree. The underlying data source removed workers earning more than one million dollars. 
[vi] Cumulative earnings were calculated for each major-institution combination, imputing earnings for missing years using the average of the two observations closest in time. Earnings were further adjusted to 2015 dollars using the Consumer Price Index. [vii] This sample was limited to individuals who were born in 1982 and working and not enrolled in school. Mean high school earnings were averaged across individuals over 14 years (2000 to 2014). [viii] Cost is estimated using average tuition revenue per full-time student less institutional discounts and allowances. We sum this variable over four years (2001 to 2004) and adjust to 2015 dollars. Note that this average is likely to be reasonably accurate even for students who take longer to graduate because in such cases they are likely to enroll in fewer classes per year, incurring lower expenses. We did not include the cost of living, because students would have had to pay those costs even if they were not enrolled in college. [ix] For instance, we might expect that college graduates would earn higher wages than the typical high school graduate even if they did not have a college degree. Essentially, our study does not take into account the fact that wages are a function of both individual characteristics and college quality. For the purposes of policy, a value-added measure has the capacity to overcome some of the limitations of this brief study. [x] The Liberal Arts and Science major is described here: https://nces.ed.gov/ipeds/cipcode/cipdetail.aspx?y=55&cipid=88372 [xi] http://www.brookings.edu/research/papers/2015/11/17-colleges-local-economies-rothwell [xii] https://www.pdx.edu/four [xiii] http://adrian.edu/admissions/financial-aid/adrianplus [xiv] http://ssir.org/articles/entry/the_case_for_social_innovation_micro_credentials [xv] http://www.npr.org/sections/ed/2014/12/20/370954988/twelve-weeks-to-a-six-figure-job Authors Beth Akers, Jonathan Rothwell Image Source: © Lucas Jackson / Reuters Full Article
data Who is eligible to claim the new ACA premium tax credit this year? A look at data from 10 states By webfeeds.brookings.edu Published On :: Tue, 14 Apr 2015 15:51:00 -0400 Each year millions of low- to moderate-income Americans supplement their income by claiming the Earned Income Tax Credit (EITC) during tax season. Last year, 1 in 5 taxpayers claimed the credit and earned an average of nearly $2,400. This tax season, some of those eligible for the EITC may also be able to claim, for the first time, a new credit created by the Affordable Care Act (ACA) to offset the cost of purchasing health insurance for lower-income Americans. It’s called the ACA premium tax credit. To qualify for the ACA premium tax credit, filers need first to have an annual income that falls between 100 and 400 percent of the federal poverty line (between $11,670 and $46,680 for a single-person household in 2014). Beyond the income requirements, however, filers must also be ineligible for other public or private insurance options like Medicaid or an employer-provided plan. Why the tax credit overlap matters Identifying the Americans eligible for both credits is important because it sheds light on how many still need help paying for health insurance even after the ACA extended coverage options. In a recent study of the EITC-eligible population, Elizabeth Kneebone, Jane R. Williams, and Natalie Holmes estimated what share of EITC-eligible filers might also qualify for the ACA premium tax credit this year. Below, see a list of the top 10 states with the largest overlap between filers eligible for the EITC and those estimated to qualify for the ACA premium tax credit.* Notably, none of these states has expanded Medicaid coverage to low-income families after the passage of the ACA. Nationally, an estimated 7.5 million people (4.2 million “tax units”) are likely eligible for both the ACA premium tax credit and the EITC. Nearly 1.3 million of those tax units are from the following ten states. 1. 
Florida
Overlap: 22.5 percent / 405,924 tax units
State-based exchange? No
Expanded Medicaid coverage? No

2. Texas
Overlap: 21.4 percent / 513,061 tax units
State-based exchange? No
Expanded Medicaid coverage? No

3. South Dakota
Overlap: 20.5 percent / 15,124 tax units
State-based exchange? No
Expanded Medicaid coverage? No

4. Georgia
Overlap: 19.8 percent / 186,020 tax units
State-based exchange? No
Expanded Medicaid coverage? No

5. Louisiana
Overlap: 19.6 percent / 86,512 tax units
State-based exchange? No
Expanded Medicaid coverage? No

6. Idaho
Overlap: 19.3 percent / 28,855 tax units
State-based exchange? Yes
Expanded Medicaid coverage? No

7. Montana
Overlap: 18.9 percent / 18,138 tax units
State-based exchange? No
Expanded Medicaid coverage? No

8. Wyoming
Overlap: 18.4 percent / 7,276 tax units
State-based exchange? No
Expanded Medicaid coverage? No

9. Utah
Overlap: 18.1 percent / 42,284 tax units
State-based exchange? No (Utah runs a small-business marketplace, but it relies on the federal government for an individual marketplace)
Expanded Medicaid coverage? No

10. Oklahoma
Overlap: 18.0 percent / 63,045 tax units
State-based exchange? No
Expanded Medicaid coverage? No

* For the purposes of this list, we measured the overlap in “tax units,” not people. One tax unit equals a single tax return. If a family of four together qualifies for the ACA premium tax credit, they would be counted as one tax unit, not four, since they filed jointly with one tax return. Authors Delaney Parrish Image Source: © Rick Wilking / Reuters Full Article
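The two-part eligibility test described in the piece above can be sketched as a simple check. The poverty-line figure is the 2014 single-person value quoted in the text; real eligibility also depends on household size, filing status, and other details omitted here:

```python
# 100% of the 2014 federal poverty line for a single-person household,
# as quoted in the text ($11,670; 400% is $46,680).
FPL_SINGLE_2014 = 11_670

def aca_credit_eligible(income, has_other_coverage_option):
    """Income between 100% and 400% of the federal poverty line, and no
    Medicaid or employer-plan alternative available."""
    in_band = FPL_SINGLE_2014 <= income <= 4 * FPL_SINGLE_2014
    return in_band and not has_other_coverage_option

print(aca_credit_eligible(30_000, False))  # True
print(aca_credit_eligible(30_000, True))   # False: employer plan available
print(aca_credit_eligible(50_000, False))  # False: above 400% of the FPL
```

The second condition is why Medicaid expansion matters for the overlap: in expansion states, more low-income filers have another coverage option and so fall out of premium-tax-credit eligibility.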
data New local data on EITC benefits by number of children By webfeeds.brookings.edu Published On :: Tue, 13 Oct 2015 17:18:00 -0400 One in five tax filers in the United States claims the Earned Income Tax Credit—a refundable federal tax credit targeted to low-income working Americans that has proven to be one of the nation’s most effective anti-poverty policies. At tax time last year, the average EITC filer claimed just over $2,400 through the credit. However, the share of filers claiming the EITC and the level of benefits they receive vary widely within and across communities, as shown by the local-level IRS data we post each year on our EITC Interactive data tool. For instance, almost one in three filers in the Memphis metro area claimed the credit (32 percent) in tax year 2013, compared to just 12 percent of filers in metro Boston. These numbers can be affected by local labor market conditions, such as the incidence and concentration of low-wage jobs, as well as by regional differences in cost of living and average wage levels. But the credit itself is also designed to vary across different kinds of filers and families. Maximum credit levels for workers without children are quite small, but they increase considerably for workers with one, two, or three children—boosting the credit’s work incentive and anti-poverty impacts.

For the first time, our EITC Interactive tool now includes data on how EITC receipt varies by the number of children claimed. According to those data, last tax year workers without qualifying children received an average credit of $281 (Figure 1). Although they made up almost one in four EITC filers, childless workers accounted for just 3 percent of EITC dollars claimed, due to the small size of their credit (Figure 2). In contrast, workers with one child—the largest share of EITC filers (37 percent)—claimed an average credit of $2,316.
Workers with two kids accounted for 27 percent of EITC filers, but with an average credit of $3,682 they took home 40 percent of all EITC dollars. Working families with three or more children made up the smallest share of EITC filers last tax year, but claimed the largest credit on average at $4,036. These data, which are available down to the ZIP code level, offer insights into the ways in which the makeup of the EITC population (and the low-wage workforce more generally) varies across places. Returning to the Memphis and Boston regions, each metro area received more than half a billion dollars through the EITC last year ($517 and $512 million, respectively). However, the number of filers claiming the EITC was much larger in metro Boston (256,456) than in the Memphis metro area (178,241). In part, these numbers reflect the fact that 30 percent of metro Boston’s EITC filers were childless workers. In the Memphis metro area, just 15 percent of EITC filers did not have qualifying children, while 41 percent had one child, 31 percent had two children, and 12 percent had three or more children—higher than Boston’s share of EITC filers with children across the board (37 percent had one child, 24 percent had two children, and 9 percent had three or more children). For EITC outreach campaigns working to ensure eligible filers claim the EITC at tax time, and for practitioners looking to use tax time to connect low-income workers to financial services and benefits, these numbers give a sense of who lives in their community and how to target their services. For advocates and policymakers, these numbers help shed light on how potential changes to the credit might affect different places. For instance, the Obama administration, several legislators, and at least one presidential candidate have proposed expanding the EITC for workers without qualifying children to make it a more effective poverty alleviation and work support tool. 
Every congressional district in the country has childless workers or noncustodial parents who would stand to benefit from that expansion. But that expansion would be particularly important for the more than 240 districts—largely clustered on the coasts and roughly split between Republican and Democratic representatives—with above average shares of childless EITC filers (Map 1). In contrast, if Congress does not act to make recent expansions to the credit permanent, every district will see a cut in EITC benefits in 2017, when the credit for workers with three or more children is set to disappear. In particular, more than 200 districts with above average shares of EITC filers with three or more kids—this time predominantly Republican districts clustered in the Intermountain West, parts of the Great Plains, and along the Texas border—would be most affected (Map 2). In the coming weeks, we will be delving deeper into the impact of proposed and potential changes to the EITC and releasing new resources on the EITC-eligible population and the credit’s anti-poverty impact. In the meantime, these new EITC Interactive data offer an important resource that can help practitioners, policymakers, advocates, and researchers better understand how the EITC affects low-income workers and families and their communities across the country. Authors Elizabeth Kneebone Full Article
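As a rough consistency check on the figures in the piece above, multiplying each group's share of EITC filers by its average credit yields that group's share of total EITC dollars. The filer shares below are approximate readings of the text ("almost one in four," 37 percent, 27 percent), and the three-or-more share is inferred so the shares sum to one; these are illustrative numbers, not the underlying IRS microdata.

```python
# Rough consistency check on the EITC figures cited above: each group's share
# of filers times its average credit gives that group's share of total EITC
# dollars. Filer shares are approximate readings of the text; the
# three-or-more share is inferred (1 - 0.24 - 0.37 - 0.27). Illustrative
# numbers only, not the underlying IRS microdata.

groups = {
    # qualifying children: (approx. share of EITC filers, average credit in $)
    0: (0.24, 281),
    1: (0.37, 2_316),
    2: (0.27, 3_682),
    3: (0.12, 4_036),  # "three or more" children; share inferred
}

# Weighted average credit across all filers (should land near the ~$2,400
# average credit cited in the text).
avg_credit = sum(share * credit for share, credit in groups.values())

# Each group's share of total EITC dollars claimed.
dollar_share = {
    kids: share * credit / avg_credit for kids, (share, credit) in groups.items()
}

# Childless workers end up with only about 3 percent of dollars despite being
# nearly a quarter of filers, while two-child families take home roughly
# 40 percent, matching the shares quoted in the piece.
print(round(avg_credit), {k: round(v, 2) for k, v in dollar_share.items()})
```

The weighted average credit implied by these rounded inputs also lands close to the roughly $2,400 average credit cited at the top of the piece, which suggests the quoted shares and averages are internally consistent.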
data FAFSA completion rates matter: But mind the data By webfeeds.brookings.edu Published On :: Thu, 05 Jul 2018 09:00:06 +0000 FAFSA season has just ended -- the final deadline to fill out the 2017-18 Free Application for Federal Student Aid (FAFSA) was June 30. This year, as every year, many students who are eligible for aid will have failed to complete the form. This means many miss out on financial aid, which can have a… Full Article
data Hope in heterogeneity: Big data, opportunity and policy By webfeeds.brookings.edu Published On :: Thu, 01 Feb 2018 17:58:30 +0000 “Big data” is particularly useful for demonstrating variation across large groups. Using administrative tax data, for example, Stanford economist Raj Chetty and his colleagues have shown big differences in upward mobility rates by geography, by the economic background of students at different colleges, by the earnings of students taught by different teachers, and so on.… Full Article
data Facilitating biomarker development and qualification: Strategies for prioritization, data-sharing, and stakeholder collaboration By webfeeds.brookings.edu Published On :: Tue, 27 Oct 2015 09:00:00 -0400 Event Information: October 27, 2015, 9:00 AM - 5:00 PM EDT, Embassy Suites Convention Center, 900 10th St NW, Washington, DC 20001

Strategies for facilitating biomarker development

The emerging field of precision medicine continues to offer hope for improving patient outcomes and accelerating the development of innovative and effective therapies that are tailored to the unique characteristics of each patient. To date, however, progress in the development of precision medicines has been limited due to a lack of reliable biomarkers for many diseases. Biomarkers include any defined characteristic—ranging from blood pressure to gene mutations—that can be used to measure normal biological processes, disease processes, or responses to an exposure or intervention. They can be extremely powerful tools for guiding decision-making in both drug development and clinical practice, but developing enough scientific evidence to support their use requires substantial time and resources, and there are many scientific, regulatory, and logistical challenges that impede progress in this area. On October 27, 2015, the Center for Health Policy at The Brookings Institution convened an expert workshop that included leaders from government, industry, academia, and patient advocacy groups to identify and discuss strategies for addressing these challenges. Discussion focused on several key areas: the development of a universal language for biomarker development, strategies for increasing clarity on the various pathways for biomarker development and regulatory acceptance, and approaches to improving collaboration and alignment among the various groups involved in biomarker development, including strategies for increasing data standardization and sharing.
The workshop generated numerous policy recommendations for a more cohesive national plan of action to advance precision medicine.

Event Materials: 1027 Brookings biomarkers workshop agenda; 1027 Biomarkers workshop backgrounderfinal; 1027 Biomarkers workshop slide deckfinal; 1027 Biomarkers workshop participant listfinal

Full Article