Apple, Google, Microsoft or Amazon – which of these tech giants will help you live your life and spend your money? Whose AIs will you trust?

  • Google has Google Home, a hands-free smart speaker that will be able to answer questions, supported by advances in translation and image recognition.
  • Microsoft hopes to dominate the business space.
  • Apple has the HomePod, to be launched in December, and is investing in emotion-detecting technology.
  • Amazon has Alexa, which will, on request, provide access to goods and services, with more to come.

And according to an article in the September 2017 edition of Wired, authored by Liat Clark, Amazon is the front-runner. Whereas Google can provide information, Amazon can bring you things! Google Home is the smart friend at a party whereas Alexa is a benign butler. According to Liat Clark …

Amazon wants to introduce Alexa into every area of your life: your home, car, hospital, workplace. The ‘everything’ store is about to be everywhere. Alexa has to be human-like because it is essential that people trust her enough to let visual and audio ‘surveillance’ into their homes and lives. At the moment Alexa can only try to empathise with words, but when she has cameras at her disposal she will be able to respond to visual cues as well as aural input. To that end, Alexa is becoming more human-like: she can whisper, pause, take a breath, adjust her pitch and emphasise key words such as ‘ahem’ and ‘yay’ in more engaging ways. Forging an apparently ‘emotional’ response from Alexa is the goal.

An AI will need to know a person well to engage in a relationship based on emotional response. Amazon may well know more about you than your closest friends do, and so, of course, will Alexa, who will be able to use both what you say and what you do to forge, maintain and extend that relationship. The insightful film Robot and Frank asked the question, “Can an AI be your friend?” Amazon has the answer: “Of course, if you trust the AI as you might another human.” And that is Amazon’s overriding intention – to get us to trust Alexa as we might a human friend, in the knowledge that she is not in fact another human and hence will not pry into our lives or betray us as a human friend might.
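Those delivery tricks are implemented in software that skill developers can already use: Alexa’s speech can be shaped with SSML markup. The short sketch below is purely illustrative, not Amazon’s own code (the response wrapper and the wording are mine), but the whisper, pause, pitch and interjection tags are documented Alexa Skills Kit SSML features of the kind referred to above.

import json

# Illustrative SSML: an emphasised interjection, a pause, a pitch change and a whisper.
ssml = (
    "<speak>"
    "<say-as interpret-as='interjection'>yay!</say-as>"
    "<break time='500ms'/>"
    "<prosody pitch='low'>Your order is on its way.</prosody>"
    "<amazon:effect name='whispered'>I won't tell anyone what you bought.</amazon:effect>"
    "</speak>"
)

def build_response(ssml_text):
    # Wrap the SSML in the JSON structure a custom Alexa skill returns.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml_text},
            "shouldEndSession": True,
        },
    }

print(json.dumps(build_response(ssml), indent=2))

Small touches like these are what make the voice feel less like a machine and more like the trusted companion Amazon is aiming for.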

Of course, Jeff Bezos and the CEOs of the other tech giants are constructing cathedrals of capitalism where they intend consumers to come to worship and to offer up their wages as sacrifice in return for the goods and services recommended and provided by AIs they trust. But here there is a supreme irony. The very same AIs that are at the heart of this new faith are also being deployed to automate many of the functions the worker-worshippers perform to earn the wages they need to live out their consumerist lives. AIs may be simultaneously the engine of capitalism and its doom. What are we to make of this conundrum? Surely it is worth discussing with the young people whose lives will be most affected by this impact of technology on society, and by society’s response. And where better to do this than in design & technology lessons?

As always, comments welcome.

The Importance of Technological Perspective. Or: It’s no longer OK not to understand how the Internet works.

We’ve mentioned a few times, often in the context of our Disruptive Technologies work, how important we believe it is that a part of the work of D&T in schools should be to enable young people to gain ‘Technological Perspective’. David has described this as:

(that) which provides insight into ‘how technology works’, informing a constructively critical view of technology, avoiding alienation from our technologically-based society and enabling consideration of how technology might be used to provide products and systems that help create the sort of society in which young people wish to live.

Events following the awful attacks, first in Manchester and then in London last Saturday night, have brought home to me just how important this is, as these young people will be the future decision-makers and leaders of our society – and they simply must be equipped to do a better job than our current leaders.

I’m sure you’ll have seen that, in response to the attacks, there has (once again) been an attempt to blame the Internet and a call from Theresa May for the ending of ‘safe spaces’ for terrorists on the Internet. Given that this was a thrust of government policy before the attack, it’s hard not to see this as an opportunistic attempt to shore up that policy, but perhaps now is not the time for cynicism.

It is, however, the time for a clear-eyed analysis of what it would mean to end safe spaces on the internet. In case you are tempted to think that sounds like a pretty good idea, I offer you three articles that explain why it’s not just a very poor idea but in fact a rather meaningless one – all written by people who are far more articulate on this than I can be.

The first, from The Guardian’s Charles Arthur, is ‘Blame the internet’ is just not a good enough response, Theresa May; at bottom, Arthur’s argument is that banning technology is not a substitute for clear-headed policy and political action. He points out that, in the 1970s, Northern Ireland’s terrorists got on just fine organising their plots using the ordinary telephone service (since neither mobile phones nor the internet were then available), and no-one was suggesting that in response all phone calls should be monitored. Presumably, if that had happened, they would simply have used other communication methods (dead-letter drops?).

Arthur notes the dystopian implications of the suggestion by John Mann (MP for Bassetlaw), who said: “I repeat, yet again, my call for the internet companies who terrorists have again used to communicate to be held legally liable for content”, and says:

The authoritarian sweep of Mann’s idea is chilling: since legal liability is meant to deter, the companies would need people to monitor every word you wrote, every video you watched, and compare it against some manual of dissent. It’s like a playbook for the dystopia of Gilead, in The Handmaid’s Tale (which, weirdly enough, most resembles Islamic State’s framework for living).

I think the summary of what Arthur has to say (but please read him for yourself) is that:

  • Banning technologies will simply drive ‘bad actors’ to other means of communication,
  • but will have a highly negative effect on our own technological society;
  • rather, the focus should be on disabling the source of the ideas, both internationally and at home. Arthur doesn’t say this, but it seems important to note that after both the recent London and Manchester attacks it emerged that the perpetrators had earlier been reported (apparently fruitlessly) to the authorities for their worrying behaviour and views; such reports clearly need better responses, and there needs to be supportive community work that encourages this kind of reporting by taking it seriously.

The second article, from MIT’s Technology Review, Theresa May Wants to End “Safe Spaces” for Terrorists on the Internet. What Does That Even Mean?, reinforces the third point above by noting the importance of personal contact in developing extremist ideas. This article also makes the point well that there are things the big social networks can do, and can be supported in doing, that fall short of monitoring everything you say.

The third article is Theresa May wants to ban crypto: here’s what that would cost, and here’s why it won’t work anyway by Cory Doctorow. This more technical article explains why it is that banning ‘safe spaces’ fundamentally means undermining all internet cryptography, what the appalling costs of that would be and why it still wouldn’t stop terrorists anyway. I urge you to read the full argument, but this is the summary:

This, then, is what Theresa May is proposing:

  • All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept
  • Any firms within reach of the UK government must be banned from producing secure software
  • All major code repositories, such as Github and Sourceforge, must be blocked
  • Search engines must not answer queries about web-pages that carry secure software
  • Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services
  • All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped
  • Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software
  • Anyone visiting the country from abroad must have their smartphones held at the border until they leave
  • Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons
  • Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright

That may sound like a ridiculous set of conclusions; just read the full article.
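To see why that list follows, it helps to remember that strong encryption is not a ‘place’ that can be closed down; it is freely available, open-source software that anyone can run on their own machine. As a purely illustrative sketch (assuming the widely used open-source Python ‘cryptography’ package is installed; any of dozens of free libraries would do the same job), this is all it takes to exchange a message that no intermediary can read:

from cryptography.fernet import Fernet

# Anyone, anywhere, can generate a key on their own machine ...
key = Fernet.generate_key()
cipher = Fernet(key)

# ... encrypt a message so that only holders of the key can read it ...
token = cipher.encrypt(b"Meet at the usual place at eight.")
print(token)  # ciphertext: meaningless bytes to anyone intercepting it

# ... and decrypt it again at the other end.
print(cipher.decrypt(token))  # b'Meet at the usual place at eight.'

Because the maths and the code are public and already everywhere, a UK-only ban cannot remove them; it can only penalise legitimate users, which is exactly Doctorow’s point.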

And then, please, find ways to discuss these things with the young people in your schools; make sure they, at least, do understand how the technologies around them, including the Internet, work. Having well-informed technological perspective really does matter.