so:text
|
Remember Bostrom’s definition of existential risk, which refers to the annihilation not of human beings, but of “Earth-originating intelligent life.” The replacement of our species by some other form of conscious intelligent life is not in itself, impartially considered, catastrophic. Even if the intelligent machines kill all existing humans, that would be ... a very small part of the loss of value that Parfit and Bostrom believe would be brought about by the extinction of Earth-originating intelligent life. The risk posed by the development of AI, therefore, is not so much whether it is friendly to us, but whether it is friendly to the idea of promoting wellbeing in general, for all sentient beings it encounters, itself included. (en) |