Yes, I think I probably am concerned here... I didn't double-check, but I believe it was Max Tegmark, the MIT physicist/cosmologist who sees the universe/reality as mathematical in nature, who has actually received huge grants to study a similar question.
I took this quote out of your link: "Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach."
And of course, as your article explains, it is impossible to ask these machines, or otherwise discern, how they come to their decisions. My main concern is that AI might replicate what I see as a negative direction of progress... one where power, money, and greed are the overriding goals, and we are all expected to subordinate ourselves to them.
* * *
I actually think the loss of our empathic understanding of one another is a more immediate concern than the machines deciding we are superfluous.