Scary stuff, but something I think we’re already deeply mired in: Physiognomy’s New Clothes (the new racism, courtesy of machine learning).
Reminds me of the book Weapons of Math Destruction, which also made many important points about the problems with “runaway” algorithms: not only do they risk falling into closed feedback loops (and thus reinforcing their native biases), but their builders are often no longer around to check that the algorithms still behave according to theory, or to validate their results qualitatively.
What’s more, having worked on many data science projects, I know how easy it is to build models that can be made to say almost anything I want just by adjusting a couple of parameters.
And let’s just say that models that don’t quite agree with management’s decree don’t always see the light of day.
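To make that concrete, here’s a minimal sketch (with entirely hypothetical scores and function names): the same model outputs can support opposite headline conclusions depending on a single tunable cutoff.

```python
# Hypothetical risk scores produced by some model for ten applicants.
scores = [0.35, 0.42, 0.48, 0.51, 0.55, 0.58, 0.61, 0.66, 0.72, 0.80]

def approval_rate(scores, threshold):
    """Fraction of applicants the model 'approves' at a given cutoff."""
    approved = [s for s in scores if s >= threshold]
    return len(approved) / len(scores)

# Same model, same data -- only the threshold parameter moves.
print(approval_rate(scores, 0.50))  # 0.7 -> "most applicants are low risk"
print(approval_rate(scores, 0.65))  # 0.3 -> "most applicants are high risk"
```

Nothing about the underlying model changed between the two runs; only a parameter did, yet the story the numbers tell flips completely.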
(Link to the article above, “Physiognomy’s New Clothes”, via the wonderful Marginal Revolution blog, which also noted that the piece had been “neglected”. Personally, I find myself leaning more and more toward the AI doomsayers. The more I know, the more I worry.)