Fourth Symposium on
Biases in Human Computation
and Crowdsourcing

12-13 October 2022, Online Event

About

BHCC

Human Computation and Crowdsourcing have become ubiquitous in the world of algorithm augmentation and data management. However, humans have various cognitive biases that influence the way they make decisions, remember information, and interact with machines. It is thus important to identify human biases and analyse their effect on complex hybrid systems. On the other hand, the potential interaction with a large pool of human contributors gives the opportunity to detect and handle biases in existing data and systems.

The goal of this symposium is to analyse both existing human biases in hybrid systems and methods to manage bias via crowdsourcing and human computation. We will discuss different types of bias, measures and methods to track it, and methodologies to prevent and mitigate it.

An interdisciplinary approach is often required to capture the broad effects that these processes have on systems and people, and at the same time to improve model interpretability and systems’ fairness.

We will provide a framework for discussion among scholars, practitioners and other interested parties, including industry, crowd workers, requesters and crowdsourcing platform managers. We expect contributions combining ideas from different disciplines, including computer science, psychology, economics and social sciences.

Submit

Overview

We welcome the submission of research papers and abstracts describing original work that is not currently under review and has not been previously published or accepted for publication in any other journal or conference.

Submissions of research papers must be written in English, in PDF format, and use the current CEUR-WS single-column conference format.

We will follow CEUR-WS guidelines, meet their preconditions, and expect to get the proceedings published. However, note that there is no guarantee that our volume will get published at CEUR-WS.

It is also possible to opt out of publication by sending an email to the organizers.

  • We welcome the submission of the following types of contributions:
    • Full papers should be at most 10 pages in length (including figures, tables, appendices, and references);
    • Short papers should be at most 5 pages in length (including figures, tables, appendices, and references);
    • Abstracts should be at most 1 page in length (including figures and tables), should contain just a title and the abstract, and should describe demos or relevant work or ideas under development. They cannot contain references.
  • Topics of interest include (but are not limited to):
    • Biases in Human Computation and Crowdsourcing
    • Human sampling bias
    • Effect of cultural, gender and ethnic biases
    • Effect of human-in-the-loop training and past experiences
    • Effect of human expertise vs. interest
    • Bias in experts vs. bias in crowdsourcing
    • Bias in outsourcing vs. bias in crowdsourcing
    • Bias in task selection
    • Task assignment/recommendation for reducing bias
    • Effect of human engagement on bias
    • Responsibility and ethics in human computation and bias management
    • Preventing bias in crowdsourcing and human computation
    • Creating awareness of cognitive biases among human agents
    • Measuring and addressing ambiguities and biases in human annotation
    • Human factors in AI
    • Using Human Computation and Crowdsourcing for Bias Understanding and Management
    • Biases in Human-in-the-loop systems
    • Identifying new types of cognitive bias in data or content
    • Measuring bias in data or content
    • Removing bias in data or content
    • Dealing with algorithmic bias
    • Fake news detection
    • Diversification of sources by means
    • Provenance and traceability
    • Long-term crowd engagement
    • Generating benchmarks for bias management

Important Dates

Timezone: Anywhere on Earth (AoE)

  • Full, Short, and Abstract papers due: 1 September 2022 AoE (firm deadline)
  • Notifications: 10 September 2022
  • Conference: 12, 13, and 14 October 2022

How To

Procedure

We implement a double-blind review process. Submissions must be anonymous and must be made via EasyChair: https://easychair.org/conferences/?conf=bhcc2022

We are committed to creating an equal-opportunity environment, without regard to race, gender identity or expression, age, disability, or any other status. For this reason, if you feel that you are in a disadvantaged situation or require assistance, please reach out to us (bhcc2022@easychair.org). We will be more than happy to help so that everyone is able to submit a paper.

We are keen to create a fair working environment for crowd workers and annotators. For this reason, each submission should clearly state the policies implemented to pursue this aim: each paper should report the amount of work required of an annotator to complete the task, the payment, the time annotators spent finishing the task, and any other details needed to make clear that workers and annotators received fair compensation and treatment for their work.
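As an illustration of the kind of reporting we expect, the short Python sketch below computes the effective hourly rate implied by a per-task payment and an average completion time; all names and values are hypothetical placeholders, not figures from any submission.

# Hypothetical sketch: the effective hourly rate implied by per-task
# payment and average completion time. All numbers are placeholders.
def effective_hourly_rate(payment_per_task: float, minutes_per_task: float) -> float:
    """Hourly pay implied by the per-task payment and completion time."""
    return payment_per_task * (60.0 / minutes_per_task)

payment = 0.50       # USD paid per completed task (placeholder)
avg_minutes = 2.5    # average minutes an annotator spends per task (placeholder)
print(f"Effective hourly rate: ${effective_hourly_rate(payment, avg_minutes):.2f}/hour")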

EasyChair

Program

BHCC 2022 Program

Detailed Program (Rome Timezone, GMT+2)

Symposium Opening

BHCC 2022 Chairs

Introduction Talk: Biases in Human Computation and Crowdsourcing

Kevin Roitero

Damiano Spina (RMIT University)

A Crowdsourcing Methodology to Measure Algorithmic Bias in Black-box Systems: A Case Study with COVID-related Searches

Dominik Stammbach (ETH Zurich)

Abstractive Summarization for Explainable Claim Verification

Joel Mackenzie (The University of Queensland)

Exploring the Variability of Crowdworker Querying Behaviour

Virtual Coffee + Social

Break

Kevin Roitero

Germano Massullo (CERN)

BOINC - A platform for volunteer computing

Gianluca Demartini (The University of Queensland)

The Source and The Effect of Biased Human Labels on Machine Learning Decisions

Eddy Maddalena (University of Udine)

Qrowdsmith: gamification and furtherance incentives to enhance paid microtask crowdsourcing

Davide Ceolin (Centrum Wiskunde & Informatica)

Explaining Argument-based Information Quality Assessments through Crowdsourcing

Virtual Coffee + Social

David La Barbera

Matt Lease (University of Texas at Austin)

A Better Way to Measure Annotator Agreement for Complex Tasks

Nirmal Roy (TU Delft)

Users and Contemporary SERPs: A (Re-)Investigation

Symposium Closing

Symposium Opening

Danula Hettiachchi

Falk Scholer (RMIT University)

Measurement Scales and Crowd Assessments

Tom Lei Han (The University of Queensland)

Are Citizen Scientists and Crowd Workers Complementary?

Shaoyang Fan (The University of Queensland)

Socio-Economic Diversity in Human Annotations

Johanne Trippas (RMIT University)

Mastering your PhD candidature: Practical Tips

Break

Michael Soprano

Tim Draws (TU Delft)

Applying the Cognitive-Biases-in-Crowdsourcing Checklist

Ujwal Gadiraju (TU Delft)

Using Analogies and Commonsense Knowledge for Intelligible Explanations

Alessandro Checco (University of Rome La Sapienza)

Online communities, misinformation, and post-truth - a computational social science perspective

Elisa Cavatorta (King's College London)

Revealing the space for a peace agreement among parties in conflict

Virtual Coffee + Social

Kevin Roitero

Jie Yang (TU Delft)

Human-In-the-Loop AI: Building Trustworthy AI With People

Jordan Freitas (Loyola Marymount University)

Navigating Expert and Learner Bias in Crowdsourced Annotation

Symposium Closing

Venue

The symposium will be held as an online event.

Organizers

The team behind BHCC 2022

Lorenzo Bracciale

General Chair

Kevin Roitero

General Chair

Michael Soprano

Proceedings and Website Chair

David La Barbera

Social Media Chair

Danula Hettiachchi

Sponsorship Chair

Registration

Open

Standard Access
Free Of Charge

Please fill in the registration form to participate in BHCC 2022.


Contact Us

Official contact information