Recent academic and journalistic reviews of web services have revealed that many systems exhibit subtle biases reflecting historic discrimination. Examples include racial and gender bias in search advertising, image recognition services, sharing-economy mechanisms, pricing, and web-based delivery. The list of production systems exhibiting such biases continues to grow, and the problem may be endemic to the way models are trained and to the data used to train them.
At the same time, concerns about user autonomy and fairness have been raised in the context of web-based experimentation, such as A/B testing and explore/exploit algorithms. Given the ubiquity of this practice and its increasing adoption in potentially sensitive domains (e.g., health and employment), questions of user consent and risk will become fundamental to the practice.
Finally, understanding the reasons behind the predictions and outcomes of web services is important both for optimizing a system and for building trust with users. It also has legal and ethical implications when an algorithm has an unintended or undesirable impact along social boundaries.
The objective of this full-day workshop is to study and discuss problems and solutions related to algorithmic fairness, accountability, and transparency of models in the context of web-based services.