Story and Strategy Blog

Okay, so that’s a bold statement, and one that is hard to verify, but it was the persistent voice in my head as I listened to Hannah Fry’s (@FryRsquared) excellent lecture today at the RSA.

AI’s failings are well documented, from the short-sightedness of some algorithmic design, to the unconscious biases that play out in extremely dangerous ways for many, to plain careless coding and bugs. All of these are undoubtedly important but have been covered at length elsewhere. What struck me in what Fry was saying is that the real risk lies in our deep-seated and seemingly contradictory urges both to rely on technology unquestioningly and to reject it irrationally for not being perfect. And these are (very) human failings.

The examples used in the lecture (and in her book Hello World) range from our common trust in SatNav systems even when common sense shows we are going the wrong way, to the far more serious failure of common sense to overrule the sentencing of criminals based on faulty algorithmic logic assessing their likelihood of re-offending (see Brooks vs State of Virginia in Fry’s book).

Unlike currently popular dystopian views of AI, Fry’s work looks at how we can work with AI to make a better future, and what we need to do ourselves to make it happen. To my mind, her presentation suggested three interlocking questions that need to be addressed to make sure we take responsibility for AI.

• How should we (humans) manage AI?
• How should we decide (as businesses, society and individuals) where the boundaries should be established?
• And, finally – how do we need to change our behaviour to make the best of a world of AI?

Many bigger brains than mine will doubtless discuss these for years to come, but I do think there is a common thread that must form the basis of any answers. We need to find better language and better stories to help people understand what these technologies do in their lives and why.

As noted by several in the audience, the trouble with AI is that it just sounds so scary. Artificial means ‘not natural’: imitation, false, insincere, seeming to seek to deceive. Intelligence is itself cold and unemotional. Together, it is unsurprising that they conjure images of a cold and inhuman future – even without reference to The Terminator!

But alternatives like Intelligent Assistance (IA), as suggested from the audience today, or Decision Support Systems (around since the 1970s) are almost as bad and unlikely to catch on. With the mere mention of ‘AI’ purported to add 10% to a company’s valuation, it seems we are stuck with it.

But can we temper that phrase by providing better context and more inclusive stories that help people to understand what is happening and how it helps them? Technologists have a bad habit of creating their own language that is both complex and arcane to the rest of the world. Yes, undoubtedly, they need to convey complicated things, but as Fry demonstrated today – with a bit of thought and effort, even complex processes can be explained in accessible ways.

To answer the questions above we need a more inclusive, sober and well-informed debate across all segments of society to ensure that the answers are effective, rational and acceptable to the majority.

In order to have this debate, the AI industry (in all its forms) needs to start creating arguments, proofs and examples in language understood by the masses. This means stepping away from product-centric (or even commercially centric) communications towards stories that address the context, values and concerns of the populations impacted by those products.

The danger in not leaving the tech bunker is that AI innovators will find the debate increasingly toxic and the attentions of regulators ever more onerous. AI has an important role to fulfil in improving our lives, societies and economies in many ways. But this cannot happen without articulating that benefit in ways that are convincing and relevant. Let’s re-tell the AI story with humans as the heroes.