The Ethics of AI in Warfare | Lecture




  • I'm a big fan, Ollie, but your response to the "one good thing" question was, I think, a bit flippant. There's much more to AI, so I think you kind of missed the point. Have you considered that AI can teach us about ourselves, how we function, and about existence itself? AlphaGo beat Go grandmaster Ke Jie in ways that were unimaginable to the human brain. The AI created new ways to strategize the game and introduced Ke Jie and the Go-playing world to new ideas. Clearly much more than fast, cheap, efficient.

  • Finally got around to watching this lecture, and I've gotta say, as an AI engineer, I have loved every single topic you've covered when it came to AI. It's stuff that I hadn't even necessarily thought about myself, and stuff that other AI engineers, even well-meaning ones, tend to have a kneejerk defensive reaction towards. We need people from the exterior to criticize us, not just because it's ethically necessary but also because critical thinking is how science evolves.

  • Great lecture! It really is useful to use words that reflect more truthfully the nature of AI decision-making.
    That one clearly, obliviously sexist semantic nitpicker in the audience sure was something, lol.

  • Holy shit, that 'less of a question, more of a comment' person was painful to listen to. Like, we get it, you read an article, you read a book about psychology once; you don't have to bring them up every time you're beginning to lose an argument. Olly, you did a swell job of trying to get to the root of what he was trying to say. Really maturely dealt with.

  • Thank you for this, Olly. I appreciate not only the content but the fact that the lecture gave context to the questions it was asking with real-life and political examples. A lot of people who debate issues like these do it from a detached, purely logical point of view that ignores the world around them and social power dynamics.

  • Jeez, that person who spent what felt like 10 minutes explaining how implicit bias doesn't exist (and maybe that people shouldn't pay attention to gender bias too? I couldn't tell if it was the same person?) just… dismissed your arguments as semantic and then went on to make even more purely-semantic arguments. "It's not bias, it's just a mistake. So therefore, by calling it a mistake, I can deflect criticism if I make a mistake".
    Who wants to bet they get into online arguments with people who try to point out all sorts of implicit biases, like areas targeted by police, the gender pay gap, poor mental health care, or the way organisations like SSI and the DWP mistreat disabled people? Do they really think that redefining pervasive issues as "pure mistakes", and thereby removing accountability, is enough of a solution? How would they explain the recurring nature of these "mistakes", or the failure to act on recurring mistakes once they are noticed? Like… if a government "accidentally" kills a bunch of people through its police forces, shouldn't it try to figure out how those accidents were allowed to occur in the first place?
    Lots of people who dismiss sociology like to think they're big buffs on the scientific method. But the same kind of statistical analysis that points out implicit and inherited biases is what's used to verify scientific discoveries, or to make patterns emerge from seemingly random data (like particle searches, or exoplanet searches).
    If someone makes a mistake when running an experiment, the people running the experiment try to figure out how the mistake happened, and whether it's possible to make sure it simply cannot happen again in the future. Many issues NASA runs into are first-time encounters, which then get remedied once they know it can be an issue. No one accepts a Space Shuttle exploding in the air because someone mistakenly left a bad gasket on the fuel line. There is a multi-year investigation into how this mistake was allowed through the process, and no more Space Shuttles are allowed to fly in the meantime. Allowing them to do so would be negligent and could cause preventable deaths – just like allowing these "AI"* to run areas riddled with bias, which will come up in the training data.
    (*Just putting AI in scare quotes there because although neural networks are big and exciting news, they're simply the latest in a long string of things that were at one time called AI – like fuzzy logic – and that are now merely regular old features in a washing machine or a coffee maker. No one boasts of having "an AI tumble drier" because it has moisture sensors and can run until the clothes are as dry as desired and no further. Even the term "fuzzy logic" doesn't come up very much about such things anymore. They're just there. No one is ascribing personhood to an automatic tumble drier the way people currently are with autonomous vehicles.)

  • Bias in the algorithm is only a problem if it actually hinders the function of the algorithm. If the smart policing is facilitating the police being better able to catch criminals, then it is doing its job, no matter how biased it may be.

  • I think it's wrong that your country has done away with the death penalty. It is for just such occasions as you describe that the death penalty should, at a minimum, be reserved.

    But beyond that, if someone breaks into your house, armed and dangerous, you are perfectly at liberty to kill them, without any trial at all, as the intent is clear that they will use deadly force against you if you try to stop them from robbing you, as you also have the right to do. If a terrorist cell planned to set off a dirty bomb, and you discovered them, you would be perfectly in the right to kill them to prevent such a disaster. In fact, whenever your police arrest anyone for any crime, it is ultimately backed up with a threat of deadly force if they do not comply with the officer's demands.

    So on what authority do we kill foreigners? On the authority granted when they present a legitimate threat to our citizens at home or our soldiers who are otherwise acting peacefully in those countries.

  • This is so fascinating! Also, you were great at handling the student who was trying to debate you. I couldn't have done it, I was rolling my eyes too hard. And I had to pause so I could scream when he said he thought it was funny that the AI mistook black people for gorillas. I think he probably inadvertently helped you out by showing his hand so soon. Anyway, great job, you really have a way with words.

  • The unconscious bias research has very little actual evidence behind it. The people who made the Implicit Association Test now deny it has any experimental validity.

  • I have a kind criticism. "Unsupervised" is a well-known technical term in AI; when you use that word, even if your context is different from the one where the technical term applies, you're still talking about AI and AI algorithms, and it sounds really wrong to someone with some background. Not only because it is off from the usual context, but also because you used it to refer to an algorithm that is definitely not unsupervised: all car-driving algorithms are supervised, maybe reinforcement-based at most, but never unsupervised, by the nature of the problem.

    Also, about talking to technicians in the field, if you want to be more pedantic, this is important: the algorithm you said could be biased isn't. You may have algorithms that use other algorithms; there are algorithms that solve an information/data problem and algorithms that solve a user problem, an application. ZIP is an algorithm to compress generic data; your browser and YouTube use it to download videos faster, and the two algorithms' purposes are different. Algorithms that find trending words are neutral, they just apply statistics; the application uses them plus a filter algorithm to create your scenario. You should say the application is biased, even though the internal AI algorithm isn't.

    Overall your video is great. I love to listen to the insights of people from different areas, especially philosophy, about AI.
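    The algorithm-vs-application distinction this commenter draws can be sketched in a toy Python example (the posts and the blocklist are invented for illustration): a word counter that only applies statistics, wrapped by an application layer whose filter is where any bias would actually live.

    ```python
    from collections import Counter

    def trending_words(posts):
        """The 'data' algorithm: pure word-frequency statistics, no opinions."""
        counts = Counter()
        for post in posts:
            counts.update(post.lower().split())
        return [word for word, _ in counts.most_common()]

    def biased_application(posts, blocklist):
        """The 'application' layer: wraps the neutral counter with a filter.
        Any bias lives in the choice of blocklist, not in the counting."""
        return [w for w in trending_words(posts) if w not in blocklist]

    posts = ["protest downtown today", "protest turns peaceful", "new cafe downtown"]
    print(trending_words(posts)[:2])                  # neutral frequency ranking
    print(biased_application(posts, {"protest"})[:2]) # same data, filtered view
    ```

    The point of the sketch: `trending_words` is "just statistics", yet the system users actually see is `biased_application`, so that is the layer where bias should be named.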

  • This was a GREAT, GREAT, GREAT lecture! Thank you so much for recording it and posting it! Also, it's nice to see you have the opportunity to do a lecture like this!

  • When answering a student in the Q&A period of this lecture, you asked whether the student thought racism is about intent, and as this is something I myself have done a lot of thinking about, I'd absolutely love it if you made a video discussing the irrelevancy of intent when it comes to (anything, really, but in particular) bigotry.
    If you know of some books I could read on this, that'd also be very much appreciated, because I don't really know how to find any on that topic myself.

    Whether you end up replying to this, making said video, neither or both, please keep up your good work, I have loved every video you've made since I subscribed.

  • Oh yay! A presentation video that actually has subtitles! I'm not exactly hard of hearing, but the echo these always have (I never understand why they don't record directly from the microphone) makes it difficult to tell what's being said at times.

  • Great lecture — glad there are subtitles! I plan to share this.
    The issue of legal vs moral is interesting, because I find that all too often people accept laws as though they are "written in stone" and unchangeable. But we had to devise the laws in the first place. And unless you live under a dictatorship of one sort or another, the laws are still changeable. So any argument against a moral objection with "but it's lawful" is flawed from the start, because a law that fails morally is clearly one that needs to be addressed and altered or removed to better suit the morals of the society. And now society tends to encompass the whole globe.
    So there are ZERO excuses for laws that allow (or encourage) immoral outcomes. And there is no adequate defense.

    Ah, see, we had a few faux-intellectuals at the end there, with exceedingly leading questions that still failed to produce the results they were looking for.

  • I enjoyed many many bits of this lecture, and especially the care you were taking to point out all the lecturers were men, then most of the Q&A question-havers, etc. That was great! Aiming for gender diversity and anti-sexist action = YES!

    Just wanted to suggest a consideration for the future, however: you don't know from looking at anyone what their gender is, so it was a bit of an assumption when you said they were all dudes. Though it's unlikely, please consider how it might have felt if one of them actually identified as, say, a pre-transition trans woman, or a non-binary person, but only appeared "dude" to you (this type of incorrect gender assumption used to happen to me regularly in similar public fora). I think just acknowledging that complexity out loud while also encouraging women & NBs to speak would've been a good alternative. Love your channel – thanks for reading!

  • About the woman who was run over by an autonomous car: what she was doing was crossing the road holding a bike in the middle of the night with no reflective gear. Did she look right? I don't think so, because she would have seen the car and waited.

  • The clear distinction between the 'can we put our minds together and sort out this issue I see' and 'let me take this time to tell you that you're wrong' types of questions was interesting to see.

    Also an interesting example of perhaps the core issue of AI, in the lead-up to that last question. (The "from other genders" part)
    – task at hand: select between and answer questions.
    – (at best) tangentially related observation: question-askers are disproportionately dude-heavy
    – possible causes: either 'the women in this room don't have questions' (possible, but unlikely) or 'the women in this room are less likely to speak up if they do' (history might be a factor here)
    – action taken: '(explicitly) compensate for the latter'
    This is exactly the type of 'sees indication of potential problem, just outside the scope of the task and acts on it' that AI wouldn't do.

  • Holy Shit, is that Jordan Peterson in the audience??? When I heard "Do I have to apologize for my gender?", I was pretty damn sure that Peterson was in attendance. Bravo, Olly, putting up with him with gusto. I thought you gave an informative and thought-provoking lecture, but you've never disappointed me yet in this regard, so keep up the great work! Also, you looked pretty damn dapper in that suit! Keep rockin' it!

  • I really enjoyed this video, but I wonder if people might be inclined to ignore the YouTwitFace thought experiment because of the way our culture has built up the Appeal to Authority relating to technology. There's an idea that social media is created by 'very smart people' and programming languages are somehow different from if/then statements given in natural language, so people think that bias can somehow be magically overcome in a way that non-programmers are just too ignorant to understand – and sadly, a number of programmers think this too!
