Slaughterbots!

You may consider the YouTube video Slaughterbots a piece of science fiction, but that would, I think, sell it short. I prefer to think of it as a thought experiment about how swarm robots coupled with face-recognition software might be used as autonomous killer robots: that is, robots that can decide for themselves to kill a human target when the face recognised matches a 'threat' identified by those who own and control the deployment of the swarm. It's easy to see this as fanciful, but many serious folk are taking the possibility of autonomous killer robots very seriously.

From a government's point of view, deploying robot soldiers as opposed to human soldiers has many advantages, not least the lack of human casualties. At the moment robot soldiers of various kinds operate in collaboration with humans, who have the ultimate 'say' with regard to a 'kill decision'. This was explored effectively in the film Eye in the Sky, in which face-recognition software played a significant part in the human decision to initiate a lethal strike. So Eye in the Sky to some extent endorses the thesis in Slaughterbots that autonomous killer robots are a near reality. The use of swarms of killer robots reduces research and development costs significantly: each bot is cheap, mass manufacture is relatively inexpensive, and the software guiding swarm behaviour is not that complex, as indicated in the video.

Where is this issue taken seriously? Look no further than the Ban Lethal Autonomous Weapons website, which provides a call to action and links to a campaign to stop killer robots.
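The claim that the software guiding swarm behaviour is not that complex is borne out by the classic 'boids' flocking model: three simple steering rules applied by each agent (cohesion, separation, alignment) are enough to produce coordinated group movement with no central controller. Here is a minimal sketch in Python; it is purely illustrative, and the function names and coefficients are my own, not taken from the video.

```python
import math
import random

def step(agents, cohesion=0.01, separation=0.05, alignment=0.05, min_dist=1.0):
    """Advance every agent one tick; each agent is a list [x, y, vx, vy]."""
    new_agents = []
    m = len(agents) - 1                      # number of neighbours per agent
    for i, (x, y, vx, vy) in enumerate(agents):
        cx = cy = ax = ay = sx = sy = 0.0
        for j, (ox, oy, ovx, ovy) in enumerate(agents):
            if i == j:
                continue
            cx += ox; cy += oy               # cohesion: head for neighbours' centre
            ax += ovx; ay += ovy             # alignment: match neighbours' velocity
            if math.hypot(ox - x, oy - y) < min_dist:
                sx += x - ox; sy += y - oy   # separation: steer away when too close
        vx += cohesion * (cx / m - x) + alignment * (ax / m - vx) + separation * sx
        vy += cohesion * (cy / m - y) + alignment * (ay / m - vy) + separation * sy
        new_agents.append([x + vx, y + vy, vx, vy])
    return new_agents

random.seed(0)
swarm = [[random.uniform(0, 10), random.uniform(0, 10),
          random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(10)]
start = [agent[:] for agent in swarm]
for _ in range(50):
    swarm = step(swarm)
```

Because each agent reacts only to its neighbours, the same few lines scale to arbitrarily large swarms, which is precisely what makes swarms cheap to build and deploy.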

This is an important issue facing society, and the question for those of us involved in teaching young people is: to what extent should such an issue be explored in school? One of the justifications for teaching design & technology as part of a general education for all young people is that it introduces them to such issues and gives them the intellectual tools to think about them in a critical yet constructive way. I look forward to the day when such issues feature in the written examination of the recently introduced D&T GCSE. Would this be too much to ask of a GCSE introduced to reinvigorate the subject?

As always comments welcome.


Let there be science

The book Let there be science by David Hutchings and Tom McLeish explores the case for Biblical support for scientific activity. I found it fascinating, although in many places I think they conflate science with technology. Rather than seeing this as a weakness, I think it provides an opportunity to extend the consideration of Biblical revelation as to the nature and purpose of technology, and what, if anything, this might have to say about the teaching and learning of design & technology in the secondary school. With these thoughts in mind I have written Let there be science – considerations from a design & technology education perspective as both commentary and critique.

My friend and colleague Torben Steeg, the very opposite of a ‘faith head’, has read the piece and raised the following comments and questions:


On page 5 you write

Those without faith might see the universe as being ‘ordered’ in this way as a result of its intrinsic nature and not through its being created by God but that seems to me to be just as much an act of faith as believing in God.

I think one might argue that it’s been the exploration of science/scientists that has revealed that the universe does appear to be ordered – for whatever reason. In that case it’s a working assumption that could be falsified; but I guess it’s a bit circular since without such an assumption the enterprise of science wouldn’t make much sense. So you could label that ‘faith’; but I don’t think it’s the same kind of thing as religious faith. (Though I’m sure some scientists operate from a faith that is more like the religious type…)

On page 6 you write

And it is echoed in the writings of Robert White (2014), a prominent geophysicist:

Natural processes such as earthquakes, volcanic eruptions, floods and the natural greenhouse effect are what make the world a fertile place in which to live. Without them, it would become a dead, sterile world and no one would be here to see it.

(page 10)

But… if you wanted to push this, why couldn't an omnipotent god create a world (and the underlying science) where a fertile and rich environment wasn't dependent on such things?

In your discussion of Chapter 10 (pages 8–9), it occurs to me that the notion of the precautionary principle is useful – with practical examples being the original and the recent Asilomar conferences on, respectively, genetic engineering and AI.

On page 11 you write

However, the construction of the Tower of Babel (Genesis 11: 1 – 9) by which humans could reach heaven was confounded by God through the creation of multiple languages so that those building the Tower could not communicate with one another. This can be seen as a denial of technological activity when it is being used to thwart God’s purpose.

It seems to me that the Tower of Babel story is of dubious relevance; if she’s an interventionist God, why the arbitrariness of when to intervene or not? For example, why not intervene when torture or gas chambers are being built – or is she only concerned about threats to her own domain…?

But then I do think that there is a tendency for religious types to assume that God’s interventionist aims align with their own (though they would probably say that their aims align with hers…) – as when all sides in a war (or election…) pray for victory.

Nick Cave captures this nicely…

I don’t believe in an interventionist God
But I know, darling, that you do
But if I did I would kneel down and ask Him
Not to intervene when it came to you
Not to touch a hair on your head
To leave you as you are
And if He felt He had to direct you
Then direct you into my arms

(You can watch/hear the whole thing here)

I have heard it argued (persuasively to me) that the second of the Ten Commandments (You shall not use the Lord’s Name in vain) refers not to casual ‘blasphemy’ but rather to the use of phrases like ‘It’s God’s will’ to persuade folk to the opinion of the speaker.

You go on to say that:

Hence it seems that God is placing the responsibility on humanity to use technology in ways that are consistent with the covenant between God and his creation, in particular our world, the living creatures that inhabit it and the ecosystems that maintain it.

But this responsibility is given without, it seems, very clear guidance; my, admittedly casual, observation is that Christians seem to disagree about a lot of things that relate to “our world, the living creatures that inhabit it and the ecosystems that maintain it”.


Rev. Colin Davis, Rector of Carrowdore & Millisle, Church of Ireland has also read the piece and made the following comments:


It can sometimes be a popular misconception that science and faith (mostly Christian, but I guess others as well) are in opposition, and yet in reality, as Tom and David indicate, this couldn't be further from the truth. The Bible teaches that God created order out of chaos, and although the Earth can often seem a very chaotic place, in fact it 'operates' by very definite 'laws & principles'. Science, rather than being a 'spoiler' (removing the mystery from nature through explanations that are arid and lacking in wonder), helps us to understand more of how things work and provides greater insight that we can use to appreciate the wonder therein. We can see Biblical writing as exploring and revealing the relationship between God and humanity and, in revealing something of the nature of science and our obligation to pursue scientific activity, also revealing something of the nature of God.

We know from experience and history that gifts can be used for good or ill, and seeing science as a gift from God places on us 'the burden of responsible use'. The story of the Tower of Babel points very much to a warning for humanity to use God-given gifts, including science and technology, in the light of this burden, rather than to raise our own sense of achievement without regard to God's wishes, putting humanity in the position of challenging or denying God. The futility and arrogance of such challenge/denial is captured well in this anecdote I remember from my days training for the priesthood.

A group of successful scientists were so accomplished and confident that they thought to challenge God and create their own human being. God accepted the challenge and taking a handful of dust he created a human. The scientists bent down to grab some earth and God stopped them saying, “Get your own dust!”

God, in creating the Universe, including the Earth and all creatures living on the planet, wants a special relationship with humans. God loves us and wants us to love Him/Her in return and to love one another, but in doing this takes a huge risk. We have a choice as to whether we love God and one another, or not. The way we live our lives, treat one another and use the gifts of the creator will be determined by the choice we make. For the Christian, St Paul sums this up in Chapter 12 of his letter to the Romans:

3 For I say, through the grace given to me, to everyone who is among you, not to think of himself more highly than he ought to think, but to think soberly, as God has dealt to each one a measure of faith.

4 For as we have many members in one body, but all the members do not have the same function,

5 so we, being many, are one body in Christ, and individually members of one another.

6 Having then gifts differing according to the grace that is given to us, let us use them: if prophecy, let us prophesy in proportion to our faith;

7 or ministry, let us use it in our ministering; he who teaches, in teaching;

8 he who exhorts, in exhortation; he who gives, with liberality; he who leads, with diligence; he who shows mercy, with cheerfulness.

9 Let love be without hypocrisy. Abhor what is evil. Cling to what is good.

It is not too much a stretch of the theological imagination to envisage another verse along the lines:

Or she that is scientific or technological to pursue this with due humility and regard for consequences.


As always further comments or questions welcome.

Apple, Google, Microsoft or Amazon – which of these tech giants will help you live your life and spend your money? Whose AIs will you trust?

  • Google has Google Home, a hands-free smart speaker that will be able to answer questions, supported by advances in translation and image recognition.
  • Microsoft hopes to dominate the business space.
  • Apple has the HomePod, to be launched in December, and is investing in emotion-detecting technology.
  • Amazon has Alexa, which will on request provide access to goods and services, with more to come.

And according to an article in the September 2017 edition of Wired, authored by Liat Clark, Amazon is the front-runner. Whereas Google can provide information, Amazon can bring you things! Google Home is the smart friend at a party whereas Alexa is a benign butler. According to Liat Clark …

Amazon wants to introduce Alexa into every area of your life: your home, car, hospital, workplace. The 'everything' store is about to be everywhere. Alexa has to be human-like because it is essential that people trust her, enough to let visual and audio 'surveillance' into their homes and lives. Alexa can try to empathise with words alone at the moment, but when she has cameras at her disposal she will be able to respond to visual cues as well as aural input. And in response Alexa is becoming more human-like. Alexa can whisper, pause, take a breath, adjust its pitch and allow for key words such as 'ahem' and 'yay' to be emphasised in more engaging ways. Forging an apparently 'emotional' response from Alexa is the goal. An AI will need to know a person well to engage in a relationship based on emotional response. Amazon may well know more about you than your closest friends do, and so, of course, will Alexa, who will be able to use both what you say and what you do to forge, maintain and extend that relationship. The insightful film Robot and Frank asked the question, "Can an AI be your friend?" Amazon has the answer: "Of course, if you trust the AI as you might another human." And that is Amazon's overriding intention – to get us to trust Alexa as we might a human friend, in the knowledge that she is not in fact another human and hence will not pry into your life or betray you as a human friend might.

Of course Jeff Bezos, like the CEOs of the other tech giants, is constructing cathedrals of capitalism where he intends consumers to come to worship and offer up their wages as sacrifice, in return for the goods and services recommended and provided by AIs they trust. But here there is a supreme irony. The very same AIs that are at the heart of this new faith are also being deployed to automate many of the functions the worker-worshippers perform to earn the wages they need to live out their consumerist lives. AIs may be simultaneously the engine of capitalism and its doom. What are we to make of this conundrum? Surely it is worth discussing with the young people whose lives will be most affected by this impact of technology on society and by society's response. And where better to do this than in design & technology lessons?

As always comments welcome.

The Importance of Technological Perspective. Or: It's no longer OK not to understand how the Internet works.

We’ve mentioned a few times, often in the context of our Disruptive Technologies work, how important we believe it is that a part of the work of D&T in schools should be to enable young people to gain ‘Technological Perspective’. David has described this as:

(that) which provides insight into ‘how technology works’, informing a constructively critical view of technology, avoiding alienation from our technologically-based society and enabling consideration of how technology might be used to provide products and systems that help create the sort of society in which young people wish to live.

Events following the awful attacks, first in Manchester and then in London last Saturday night, have brought home to me just how important this is, as these young people will be the future decision-makers and leaders of our society – and they simply must be equipped to do a better job than our current leaders.

I’m sure you’ll have seen that, in response to the attacks, there has (once again) been an attempt to blame the Internet and a call from Theresa May for the ending of ‘safe spaces’ for terrorists on the Internet. Given that this was a thrust of government policy before the attack, it’s hard not to see this as an opportunistic attempt to shore up that policy, but perhaps now is not the time for cynicism.

It is however the time for a clear-eyed analysis of what it would mean to end safe spaces on the internet. In case you are tempted to think that that sounds a pretty good idea, I offer you three articles that explain why it’s not just a very poor idea but in fact a rather meaningless idea – all written by people who are far more articulate on this than I can be.

The first, from The Guardian's Charles Arthur, is 'Blame the internet' is just not a good enough response, Theresa May; at bottom, Arthur's argument is that banning technology is not a substitute for clear-headed policy and political action. He points out that in the 1970s Northern Ireland's terrorists got on just fine organising their plots using the ordinary telephone service (since neither mobile phones nor the internet were then available), and no-one was suggesting that in response all phone calls should be monitored. Presumably if that had happened they would simply have used other communication methods (dead-letter drops?).

Arthur notes the dystopian implications of the suggestion by John Mann (MP for Bassetlaw), who said, "I repeat, yet again, my call for the internet companies who terrorists have again used to communicate to be held legally liable for content", and says:

The authoritarian sweep of Mann’s idea is chilling: since legal liability is meant to deter, the companies would need people to monitor every word you wrote, every video you watched, and compare it against some manual of dissent. It’s like a playbook for the dystopia of Gilead, in The Handmaid’s Tale (which, weirdly enough, most resembles Islamic State’s framework for living).

I think the summary of what Arthur has to say (but please read him for yourself) is that:

  • Banning technologies will simply drive 'bad actors' to other means of communication,
  • but will have a highly negative effect on our own technological society;
  • rather, the focus should be on disabling the source of the ideas, both internationally and at home. Arthur doesn't say this, but it seems important to note that after both the recent London and Manchester attacks it emerged that the perpetrators had earlier been (apparently fruitlessly) reported to the authorities for their worrying behaviour and views; such reports clearly need better responses, and there needs to be supportive community work to encourage this kind of reporting by taking it seriously.

The second article, from MIT’s Technology Review, Theresa May Wants to End “Safe Spaces” for Terrorists on the Internet. What Does That Even Mean?, reinforces the third point above by noting the importance of personal contact in developing extremist ideas. This article also makes the point well that there are things that the big social networks can do and be supported in doing that fall short of asking them to monitor everything you say.

The third article is Theresa May wants to ban crypto: here’s what that would cost, and here’s why it won’t work anyway by Cory Doctorow. This more technical article explains why it is that banning ‘safe spaces’ fundamentally means undermining all internet cryptography, what the appalling costs of that would be and why it still wouldn’t stop terrorists anyway. I urge you to read the full argument, but this is the summary:

This, then, is what Theresa May is proposing:

  • All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept
  • Any firms within reach of the UK government must be banned from producing secure software
  • All major code repositories, such as Github and Sourceforge, must be blocked
  • Search engines must not answer queries about web-pages that carry secure software
  • Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services
  • All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped
  • Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software
  • Anyone visiting the country from abroad must have their smartphones held at the border until they leave
  • Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons
  • Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright

That may sound a ridiculous set of things to conclude; just read the full article.
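The underlying point is that strong cryptography is mathematics, not a product that can be recalled. As a purely illustrative sketch of my own (not taken from Doctorow's article): a one-time pad, written in a dozen lines of standard-library Python, is information-theoretically unbreakable provided the key is truly random, as long as the message, and never reused. No ban on 'secure software' can reach code this short and this widely understood.

```python
import secrets

def otp_encrypt(message: bytes):
    """Encrypt with a one-time pad: XOR against a fresh random key."""
    key = secrets.token_bytes(len(message))            # key as long as the message
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so decryption is the same operation."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"meet at the safe house")
recovered = otp_decrypt(ciphertext, key)
```

The practical weakness of a one-time pad is distributing the key, which is the problem modern public-key cryptography solves; but the sketch makes the point that the primitives themselves are trivially reimplementable by anyone, anywhere.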

And then, please, find ways to discuss these things with the young people in your schools; make sure they, at least, do understand how the technologies around them, including the Internet, work. Having well-informed technological perspective really does matter.