Sun, 13 September 2015
When should we care about robots? How quickly should, and will, that change? These are just some of the questions addressed by Professor David Gunkel, whose work on the moral status of AI is among the first of its kind. In this interview, we consider the extent to which our "moral weighing" of other entities is arbitrary, and ask what such a biased process might imply when we create other aware entities.