Project SHERPA Looks to Address the Ethical and Human Rights Issues of Smart Information Systems
With the rise of Smart Information Systems (SIS), i.e., AI, the challenges of living with the implications of these systems are poorly understood. The quality of the information produced is limited by the design of the systems, the data that is used, and the context in which the data lives. Project SHERPA aims to work with industry leaders to establish an advocacy group that addresses the most challenging of these problems. As they put it, “The SHERPA project is uniquely placed to guide the ongoing debate, focus it and develop actionable recommendations and advocate them to ensure that SIS promote the public good.”
The folks at F-Secure who are involved with the project list several common flaws in many AI projects:
Design
Input features selected by the model’s designer are signal-poor, irrelevant, or introduce bias.
Model overfits the training data, causing it to fail to generalize to real-world inputs (see the sketch after this list).
Model architecture or parameters are incorrect, causing it to be inaccurate.
Data
Model fails to generalize to real-world inputs because it was trained with insufficient data.
Model makes systematic errors due to mislabeled samples in the training data.
Biases or assumptions are introduced by flawed training data. (Many real-world datasets contain inherent human bias).
Utilization
Model is attempting to do something it can’t or shouldn’t.
Design assumptions do not hold in a real-world context.
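To make the overfitting flaw above concrete, here is a minimal sketch of how it shows up in practice. It assumes Python with scikit-learn, which is an illustrative choice on my part rather than anything the SHERPA project or F-Secure prescribes: an unconstrained model memorizes a small, noisy training set, and the gap between training and held-out accuracy flags the failure to generalize.

```python
# A minimal sketch of spotting overfitting: a model that looks perfect on
# its training data but degrades on held-out data. scikit-learn is an
# illustrative choice, not something the project prescribes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small, noisy dataset: few samples and flipped labels make memorization easy.
X, y = make_classification(n_samples=200, n_features=20, flip_y=0.1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

# An unconstrained tree (max_depth=None) can memorize the training set.
model = DecisionTreeClassifier(max_depth=None, random_state=0)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # typically near 1.0
test_acc = model.score(X_test, y_test)     # noticeably lower

print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
if train_acc - test_acc > 0.1:  # illustrative threshold, not a standard
    print("Large train/test gap: the model is likely overfitting.")
```

The same train-versus-holdout comparison also surfaces the data flaws above: a model trained on insufficient or mislabeled data will score well on what it has seen and poorly on what it has not.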
Working with data is challenging. Not everyone building these systems has a good grasp of its technical difficulties or of the implications that arise from its misuse. If you want to see an elegant discussion of data and statistics, check out Hans Rosling’s book from last year, Factfulness, or his Gapminder website.