crowdsourcing

[ E.812 (05/20) ] - Crowdsourcing approach for the assessment of end-to-end quality of service in fixed and mobile broadband networks





crowdsourcing

[ P.808 (06/18) ] - Subjective evaluation of speech quality with a crowdsourcing approach





crowdsourcing

PSTR-CROWDS - Subjective evaluation of media quality using a crowdsourcing approach





crowdsourcing

SEC Clarifies Crowdsourcing Rules, What's the Impact on Renewables?

The SEC has finally proposed its rules to allow crowd-funding under the Jumpstart Our Business Startups (JOBS) Act. What do they mean for small-scale investments in renewable energy companies and projects?




crowdsourcing

Crowdsourcing the Olinguito

One year ago, the olinguito (Bassaricyon neblina) stepped out of the forest shadows into the spotlight and onto the pages of science—the first carnivore species […]





crowdsourcing

Toward Improving the Evaluation of Visual Attention Models: a Crowdsourcing Approach. (arXiv:2002.04407v2 [cs.CV] UPDATED)

Human visual attention is a complex phenomenon. A computational model of this phenomenon must take into account where people look in order to identify the salient locations (the spatial distribution of fixations), when they look at those locations in order to understand the temporal development of the exploration (the temporal order of fixations), and how they move from one location to another with respect to the dynamics of the scene and the mechanics of the eyes (the dynamics). State-of-the-art models focus on learning saliency maps from human data, a process that captures only the spatial component of the phenomenon and ignores its temporal and dynamical counterparts. In this work we focus on the evaluation methodology of models of human visual attention. We underline the limits of the current metrics for saliency prediction and scanpath similarity, and we introduce a statistical measure for evaluating the dynamics of the simulated eye movements. While deep learning models achieve astonishing performance in saliency prediction, our analysis shows their limitations in capturing the dynamics of the process. We find that unsupervised gravitational models, despite their simplicity, outperform all competitors. Finally, exploiting a crowdsourcing platform, we present a study aimed at evaluating how plausible the scanpaths generated by the unsupervised gravitational models appear to naive and expert human observers.
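
For context on the kind of purely spatial evaluation the abstract criticizes, here is a minimal sketch of Normalized Scanpath Saliency (NSS), a standard saliency-prediction metric: it scores a predicted saliency map at the ground-truth fixation locations and is entirely blind to fixation order and dynamics. This is an illustrative implementation with toy data, not the statistical measure the paper proposes.

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    # Z-score the predicted saliency map, then average its values at the
    # ground-truth fixation points. Higher is better. Note the metric
    # never looks at the *order* of the fixations: it is purely spatial.
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows, cols = zip(*fixations)  # fixations given as (row, col) pixel coordinates
    return float(s[list(rows), list(cols)].mean())

# Toy example: score a random "prediction" against three fixations.
rng = np.random.default_rng(0)
sal = rng.random((48, 64))
print(normalized_scanpath_saliency(sal, [(10, 20), (30, 40), (5, 60)]))
```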




crowdsourcing

ZebraLancer: Decentralized Crowdsourcing of Human Knowledge atop Open Blockchain. (arXiv:1803.01256v5 [cs.HC] UPDATED)

We design and implement ZebraLancer, the first private and anonymous decentralized crowdsourcing system, and overcome two fundamental challenges of decentralizing crowdsourcing: data leakage and identity breach.

First, our outsource-then-prove methodology resolves the tension between blockchain transparency and data confidentiality to guarantee the basic utility and fairness requirements of data crowdsourcing, ensuring that: (i) a requester will not pay more than what the data deserve, according to a policy announced when her task is published via the blockchain; (ii) each worker indeed gets a payment based on the policy if he submits data to the blockchain; and (iii) the above properties are realized not only without a central arbiter, but also without leaking the data to the open blockchain. Second, the transparency of the blockchain allows one to infer private information about workers and requesters through their participation history. Simply enabling anonymity is tempting, but it would allow malicious workers to submit multiple times to reap rewards. ZebraLancer overcomes this problem by allowing anonymous requests and submissions without sacrificing accountability. The idea behind it is a subtle form of linkability: if a worker submits twice to the same task, anyone can link the submissions; otherwise he stays anonymous and unlinkable across tasks. To realize this delicate linkability, we put forward a novel cryptographic concept, common-prefix-linkable anonymous authentication, which we remark may be of independent interest. Finally, we implement our protocol for a common image annotation task and deploy it on an Ethereum test network. The experimental results show the applicability of our protocol atop an existing real-world blockchain.
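
The per-task linkability the abstract describes can be illustrated with a toy sketch. The snippet below is emphatically not the paper's common-prefix-linkable anonymous authentication scheme (a full cryptographic construction); it only demonstrates the intended behavior: two submissions by the same worker to the same task share a tag and can be linked, while tags across different tasks reveal nothing in common.

```python
import hashlib
import secrets

# A worker's long-term secret; in the real scheme this role is played
# by cryptographic credentials, not a bare hash input.
worker_secret = secrets.token_bytes(32)

def submission_tag(secret: bytes, task_id: str) -> str:
    # Derive a deterministic per-task tag from the worker's secret.
    return hashlib.sha256(secret + task_id.encode()).hexdigest()

t1 = submission_tag(worker_secret, "task-42")
t2 = submission_tag(worker_secret, "task-42")  # a duplicate submission
t3 = submission_tag(worker_secret, "task-99")  # a different task

print(t1 == t2)  # True: double submissions to one task are linkable
print(t1 == t3)  # False: the same worker is unlinkable across tasks
```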




crowdsourcing

Polymath proposal: clearinghouse for crowdsourcing COVID-19 data and data cleaning requests

After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to […]




crowdsourcing

Crowdsourcing Project Aims to Document the Many U.S. Places Where Women Have Made History

The National Trust for Historic Preservation is looking for 1,000 places tied to women's history and aims to share the stories of the figures behind them




crowdsourcing

Quantum Computing Gets a Boost From AI and Crowdsourcing

Can an online game that combines human brainpower with AI solve intractable problems?




crowdsourcing

Accumulating Evidence Using Crowdsourcing and Machine Learning: A Living Bibliography about Existential Risk and Global Catastrophic Risk

The study of existential risk — the risk of human extinction or the collapse of human civilization — has only recently emerged as an integrated field of research, and yet an overwhelming volume of relevant research has already been published. To provide an evidence base for policy and risk analysis, this research should be systematically reviewed. In a systematic review, one of many time-consuming tasks is to read the titles and abstracts of research publications, to see if they meet the inclusion criteria. The authors show how this task can be shared between multiple people (using crowdsourcing) and partially automated (using machine learning), as methods of handling an overwhelming volume of research.
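
As a rough illustration of the partial automation the abstract describes (and not the authors' actual pipeline), one could train a text classifier on crowd-labeled titles and abstracts, then rank unscreened records so human reviewers read the likeliest matches first. A minimal sketch with scikit-learn, using invented example records:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical crowd-labeled examples: 1 = meets the inclusion criteria.
labeled_texts = [
    "Global catastrophic risk from engineered pandemics",
    "Existential risk and artificial general intelligence",
    "A survey of garden butterfly populations",
    "Marketing strategies for retail chains",
]
labels = [1, 1, 0, 0]

# Learn a simple bag-of-words model of what "relevant" looks like.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression().fit(vectorizer.fit_transform(labeled_texts), labels)

# Rank unscreened records by predicted probability of relevance.
unscreened = [
    "Modeling civilizational collapse scenarios",
    "Quarterly earnings of a software firm",
]
scores = clf.predict_proba(vectorizer.transform(unscreened))[:, 1]
for text, p in sorted(zip(unscreened, scores), key=lambda t: -t[1]):
    print(f"{p:.2f}  {text}")
```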




crowdsourcing

Can crowdsourcing be ethical?


In the course of my graduate work at Harvard University, I paid hundreds of Americans living in poverty the equivalent of about $2 an hour. It was perfectly legal for me to do so, and my research had the approval of my university’s ethics board. I was not alone, or even unusual, in basing Ivy League research on less-than-Walmart wages; literally thousands of academic research projects pay the same substandard rates. Social scientists cannot pretend that the system is anything but exploitative. It is time for meaningful reform of crowdsourced research.

This is what crowdsourced research looks like. I posted a survey using Mechanical Turk (MTurk), a website run by Amazon.com. Across the country, hundreds of MTurk workers (“turkers”) agreed to fill out the survey in exchange for about 20 cents apiece, and within a few days I had my survey results. The process was easy, and above all, cheap. No wonder it is increasingly popular with academics; a search on Google Scholar returns thousands of academic papers citing MTurk, increasing from 173 in 2008 to 5,490 in 2014.

Mechanical Turk is a bargain for researchers, but not for workers. A survey typically takes a couple of minutes per person, so the hourly rate is very low. This might be acceptable if all turkers were people with other jobs, for whom the payment was incidental. But scholars have known for years that the vast majority of MTurk tasks are completed by a small set of workers who spend long hours on the website, and that many of those workers are very poor. Here are the sobering facts:

  • About 80 percent of tasks on MTurk are completed by the roughly 20 percent of participants who spend more than 15 hours a week working on the site. MTurk works not because it has many hobbyists, but because it has dedicated people who treat the tasks like a job.
  • About one in five turkers earns less than $20,000 a year.
  • A third of U.S. turkers call MTurk an important source of income, and more than one in ten say they use MTurk money to meet basic needs.

[Figure: Journal articles that refer to Mechanical Turk. Source: PS: Political Science and Politics]

It is easy to forget that these statistics represent real people, so let me introduce you to one of them. “Marjorie” is a 53-year-old woman from Indiana who had jobs in a grocery store and as a substitute teacher before a bad fall left her unable to work. Now, she says, “I sit there for probably eight hours a day answering surveys. I’ve done over 8,000 surveys.” For these full days of work, Marjorie estimates that she makes “$100 per month” from MTurk, which supplements the $189 she receives in food stamps. Asked about her economic situation, Marjorie simply says that she is “poverty stricken.”

I heard similar stories from other MTurk workers—very poor people, often elderly or disabled, working tremendous hours online just to keep themselves and their families afloat. I spoke to a woman who never got back on her feet after losing her home in Hurricane Rita, and another who had barely escaped foreclosure. A mother of two was working multiple jobs, plus her time on MTurk, to keep her family off government assistance. Job options are few for many turkers, especially those who are disabled, and MTurk provides resources they might not otherwise have. But these workers, who work anonymously from home, are isolated and have few avenues to organize for higher wages or other employment protections.

Once I realized how poorly paid my respondents were, I went back and gave every one of my over 1,400 participants a “bonus” to raise their pay to the equivalent of a $10 hourly wage. (I paid an additional $15 to respondents who participated in an interview.) This cost me a little more money, but less than you might imagine. For a 3-minute survey of 800 people, going from a 20-cent to a 50-cent payment costs an additional $240. But if every researcher paid an ethical wage, it would really add up for people like Marjorie. In fact, it would likely double her monthly income from MTurk.
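
The arithmetic behind those figures is simple enough to spell out. This sketch uses only the numbers given in the text (a 3-minute survey, a $10-per-hour target, 800 respondents, and a jump from 20 to 50 cents), which are the author's figures rather than universal constants:

```python
# Wage arithmetic for ethical crowdsourced survey payments.
minutes_per_survey = 3
target_hourly_wage = 10.00   # dollars per hour
respondents = 800
original_payment = 0.20      # dollars per response

# What a $10/hour wage implies for a 3-minute survey: $0.50 per response.
ethical_payment = target_hourly_wage * (minutes_per_survey / 60)

# Extra cost of raising every respondent from 20 to 50 cents: $240.00.
extra_cost = (ethical_payment - original_payment) * respondents

print(f"Ethical payment per response: ${ethical_payment:.2f}")
print(f"Additional cost for {respondents} respondents: ${extra_cost:.2f}")
```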

Raising wages is a start, but it should not be up to individual researchers to impose workplace standards. In this month’s PS: Political Science and Politics, a peer-reviewed journal published for the American Political Science Association, I have called for new standards for crowdsourced research to be implemented not only by individual researchers, but also by universities, journals, and grantmakers. For instance, journal editors should commit to publishing only those articles that pay respondents an ethical rate, and university ethics boards should create guidelines for the use of crowdsourcing that consider wages as well as crowdworkers’ lack of access to basic employment protections.

The alternative is continuing to pay below-minimum-wage rates to a substantial number of poor people who rely on this income for their basic needs. This is simply no alternative at all.

Image Source: © Romeo Ranoco / Reuters




crowdsourcing

Crowdsourcing your bottom line

Internet lingerie startup "Adore Me" aims to disrupt the U.S. lingerie market. One strategy they're using is to crowdsource the designs they bring to market.




crowdsourcing

Latest News: Rosa Parks Crowdsourcing Project

By the People, the Library of Congress’ crowdsourced transcription project powered by volunteers across the country, is launching a campaign to transcribe Rosa Parks’ personal papers, including many items featured in the exhibition “Rosa Parks: In Her Own Words,” and make them more searchable and accessible online. The campaign begins today, the 107th anniversary of her birth.





crowdsourcing

Latest News: New Crowdsourcing Effort

The Library’s crowdsourcing initiative By the People has launched its newest campaign to enlist the public’s help to make digital collection items more searchable and accessible online. Herencia: Centuries of Spanish Legal Documents includes thousands of pages of historical documents in Spanish, Latin and Catalan.

As the first entirely non-English crowdsourced transcription project by the Library, this campaign will open the legal, religious and personal histories of Spain and its colonies to greater discovery by researchers, historians, genealogists and lifelong learners.





crowdsourcing

Updates from the Veterans History Project (VHP): LOC Crowdsourcing Project Transcribes Civil War Veterans’ Letters

This Veterans Day, learn from veterans of the past – by helping the researchers of tomorrow.

The Library of Congress holds many collections that touch on the lives and service of military personnel and the human cost of war. Although American Civil War materials fall outside the scope of the Veterans History Project, we encourage you to hone your transcription skills on wartime correspondence. The recently launched crowdsourcing project, crowd.loc.gov/, contains three collections relating to the Civil War: letters written to Abraham Lincoln, the diaries of Clara Barton, founder of the American Red Cross, and the papers of disabled veterans' advocate William Oland Bourne (1819-1901), a reformer, poet, clergyman, and editor of The Soldier's Friend journal.

The mission of the Veterans History Project of the Library of Congress American Folklife Center is to collect, preserve and make accessible the personal accounts of U.S. veterans so that future generations may hear directly from veterans and better understand the realities of war. Learn more at http://www.loc.gov/vets. Share your exciting VHP initiatives, programs, events and news stories with VHP to be considered for a future RSS. Email vohp@loc.gov and place “My VHP RSS Story” in the subject line.

