AI and Ethics




  • Having AI behave ethically: what is the standard? If we are asking it to behave like a human, what is the standard for that? I have seen many ethical tests in which people operate on a spectrum based on a range of criteria. Will there be an organization that comes up with rules to adhere to and enforces them?

  • I work in security, and part of that is application security. Getting a company to write secure code is seen as cost avoidance: it contributes no value to the product and only serves to delay its release. I see writing ethics into solutions similarly.

  • I have to admit that it was difficult for me to continue listening to this discussion past Margaret Boden's opener (I persevered, but found it a jarring departure from the tone of most of the other BAI 2017 discussions).

    I honestly don't understand why many "ethicists" seem to spend so much of their time and energy embedding themselves within traditional ways of thinking and dogmatic ideas that they can't think creatively about novel ethical problems. She was totally confident that building robots to interact with or act as caretakers for the elderly was an assault on human dignity. That is not the sort of mindset that is conducive to asking mind-expanding questions (e.g., "how could we develop systems or incentives that would explore these possibilities in ethical ways?") – she was simply ruling things out, left and right, from the outset.

    I frankly find it terrifying to hear this kind of dogmatic thinking from a person who should be leading open-ended discussion rather than shutting it down. To claim with a straight face at a conference like this that no machine can ever be built that is capable of taking responsibility or making moral choices for itself, and not be met with immediate criticism, is absolutely flabbergasting to me. It seems she doesn't believe humans to be the biological soft machines we most likely are, but rather that we are made of some magic "soul-stuff" that sets us apart.

    I have to wonder whether she understands anything about Turing's thesis and the notion of substrate independence. It's valid to ask whether this property, which ostensibly applies to universal Turing machines (computers), would also apply to conscious machines (i.e., is consciousness a wholly contained subset of all UTMs?), but to categorically rule it out is premature at best and dangerously narrow-minded at worst. To claim this is tantamount to saying that humans are imbued with "magic soul-imbuing agency dust" – she might as well have said she's sure we have souls and she knows machines can never have them.

    This kind of thinking – coming from people who fancy themselves serious philosophers – is what is going to get us (or, more precisely, many of us) killed in the future. I'm all for being careful, skeptical, and wary about projecting onto or anthropomorphizing our inventions and artifacts in illegitimate ways. There are plenty of risks to be concerned about (such as what was dramatized in the plot of Ex Machina, for example) – but the position that only humans can be moral is a dogmatic and narrow-minded over-correction on the part of the people who espouse it.

  • Your arguments and analysis regarding AI are guaranteed to be nonsense because philosophers have no reliable answers regarding consciousness or free will. Without those preliminaries firmly in place, subsequent analysis (like ethics) is not only nearly guaranteed to be nonsense; it is also reckless with the rights of others to the point of fraud and criminal damage to other people's liberties, and may even be indicative of severe NPD, in which case we need to completely defund your respective universities because you are progressing in the refinement of rubbish.
