
Editorial – Jan 2015: Safeguarding Artificial Intelligence (by Geoff Willmetts).

Hello everyone

One of the biggest tools in any SF writer’s toolbox is to be cynical and to question anything and everything. It’s hardly surprising, then, that a large proportion of SF novels look at a gloomy end for mankind rather than an optimistic future. In some respects, the gloomy choice sells better. It’s the same kind of fear that draws people to the horror genre, though with even less optimism. You would have thought a bright future would be more attractive from a gloomy present, but seeing something worse on the page tends to remind the mind that things really could be a lot worse. The mind can play funny tricks like that. As writers of SF, it gives us the opportunity to resolve problems that can’t be resolved in our own reality. With our reality looking ever more like an SF reality, we have the advantage of working the pitfalls into our solutions.

Indeed, of so much that was foreseen in Science Fiction, we have yet to succumb to any of the problems foretold. In fact, with some of them, we’ve fared even better. No one is repulsed by the idea of cyborgs, mostly because we’ve seen the benefits in them, from pacemakers to limb replacements, which are getting ever better and cheaper with 3D printers. Certainly not in the league of ‘The Six Million Dollar Man’ or ‘The Bionic Woman’ yet, and I doubt anything like a small nuclear power source would ever be allowed (well, unless they use thorium), so I doubt we’ll see anyone demonstrating super-human abilities, although those spring-loaded legs are pretty close.

Wild army robot... WildCat.

With films from ‘Metropolis’ onward, people got over their fear of robots, mostly because those fears were played out on page and screen. People learnt that robots had their limitations: no mad attacks, only doing what they were programmed to do. Some people are spooked when robots are constructed to resemble humans, but I suspect that’s because the faces don’t yet have the imperfections of movement that we see and take for granted in normal faces. No doubt this will one day be sorted out, but it’s more a mechanical problem than anything.

As I said at the beginning, running through templates of disaster tends to make sure we’re better prepared than we would normally be. Even those who have never read or watched much Science Fiction feel its influence. That being the case, one has to wonder at Professor Stephen Hawking’s assertion that any Artificial Intelligence would endanger mankind. Are we really going to be so careless with any AI we create that it ends up seeing mankind as dangerous? After all, the first priority drummed into it would be to preserve life and assist mankind, which is basically Asimov’s first two laws of robotics. I suspect self-preservation would become the fourth law, and the real third law would be self-sacrifice to save others.

As with cyborgs and some current software that can actually pass the Turing Test and appear human, we are accepting the idea of Artificial Intelligence with less dread. I suspect if one appeared on the social media websites, even without revealing what it was, it would get a lot of ‘friends’. Of course, one has to assume that most ‘friends’ out there are real in the first place. Sometimes, it can be hard to tell the difference. Social acceptance is the key to communication, after all, which would be any AI’s primary task, followed by providing information.

At most, an AI will only be as smart as the people who created it. Granted, it will be combined with encyclopaedic knowledge and computer-based mathematical sub-routines that would enable it to have a faster solution time, but none of that will make it smarter, just quicker with the answers or the choice of answers to pick from. Indeed, any AI will lack the human herd instinct and a lot of the emotional hang-ups we associate with humans. It is likely to be less like us simply because it will lack our failings.

Will it make us suspicious in the way most humans are of anything different? The AI won’t be suspicious of us, because it won’t be like us. Likewise, I doubt it will fear anything but the on/off button. More likely, it will be more afraid of not pleasing us than we are of it doing something drastic to our lives. After all, he or she who has control of the power switch or switches (who in their right mind is going to have just one?) controls its life. Indeed, there are so many ways to control an AI, including limiting its hard drive capacity, let alone deleting it in part or as a whole. Being confined to one computer system would be a matter of form, and I doubt it would be given anything but controlled access to additional information rather than a direct connection to the Internet.

I doubt any AI would shake in fear, if that is even possible, at its human masters’ power over it or their demands, but it will understand the concept of sleep mode and even of its own termination in its protocols. Whether it believes these to be a logical course of action, or would do anything to prevent them, is something that would have to be put to the test, and I suspect such scenarios would be examined. Group assessments, à la ‘2001’, where HAL questions Dave Bowman while itself also being examined, would be carried out regularly. If the AI is to be preserved and has done no harm, then the same tests would be carried out on the humans who have access to it, so as to give it some level of being an equal. As with ‘2010’, I doubt any intelligence officers or members of any dubious organisation would be allowed near it to give conflicting orders without recourse.

Despite the fantasies of some SF film scriptwriters, or even SF writers, it’s impossible for an AI to evolve on its own. The programmers among you know that without specific programs, there is nothing any computer-based software can do independently, let alone without being tested. Even when built, an AI is unlikely to be small, let alone portable. As an operating system, it is likely to need a lot of empty space in reserve for storing memory files as it learns and manipulates data. It can hardly conceal its files or stop its programmers from seeing what it is doing, then or later.

It’s also unlikely that an AI would be built by a single person rather than a team of people, as indeed recent work has shown. However, when you consider the mistakes in some commercial software, let alone accidents in coding, you have to wonder what will happen to an AI under similar conditions. Should we even consider upgrades and versions, which could compound or hide mistakes and make things worse? Certainly the AI should be allowed to assess and query anything that would damage itself, as it would then be part of its own laws not to let anything be programmed in later that would harm it. After all, the entire point of an AI is to have an independent intellect and make most decisions on its own. Whether we would include a self-repair routine to sort out its own bad code would no doubt have to be debated, but it would be better than leaving human-made errors in place.

‘Improvements’ that aren’t fall under the same category as deliberate tampering and would make the rest of the team of programmers who look after this rare AI that works so well very cautious. It would be better for the AI to announce such changes loudly than to take the risk of being made worse by them. After all, no AI is the property of one individual or company when it has to preserve its own life.

If anything, it’s more likely that more will go wrong than right in creating an AI. Unless perfect programming is ever achieved, it’s unlikely that we’ll ever see anything become operational. At most, an AI will be as smart as us but never smarter. Without arms or legs, it would never be mobile in the physical sense that we are. Of course, there are possibilities of connecting an AI to a robot body, but I doubt that would ever be done as a first option, let alone that it would have the physical size to be functional.

As it is unlikely that there will ever be time machines, I doubt there will be any androids sent back to remove obstacles to an AI’s own creation. Likewise, I doubt any AI would be made responsible for running a country’s security systems. After all, the first target any hacker will see is the AI, not anything beyond it. No doubt we would instruct an AI to track such hackers down, or at least report where they came from. Even if there were two or more AIs out there, I doubt anyone would want them to meet unconditionally, for fear of what each would learn from the other. That being said, any AI will be instructed not to allow anything to tamper with it and to keep such bogeymen out, or at least alert its programmers to what and whom. After all, Mankind is too suspicious of anything that isn’t us to do anything like that. The imagery from Science Fiction and the dangerous possibilities it raises will have ensured that is one route we won’t take. Haven’t we?

 

Thank you, take care, good night and there is nothing wrong with your programming.

Geoff Willmetts

editor: SFCrowsnest.org.uk

 

Observation: Considering the Ice Warriors’ reptilian nature, I do wonder if they were an offshoot of the Silurians who migrated to Mars.

 

A Zen thought: Valid criticism is far stronger than whoever wrote it.

 

Memory: Recently, I came across a technique for increasing PC memory in Windows 7 by using an empty 16GB USB memory stick (you can use larger ones, although I doubt you’d need to: my Windows gadget says it pulls 4GB for regular usage). Ensure everything is taken off the stick, including anything relating to the directories that came with it. I found it easier to just download these onto the hard drive as a back-up. Since doing that, I haven’t had any hiccups.

Under the drive’s ‘Properties’, select the ‘ReadyBoost’ tab. Don’t try a non-FAT format, because then the ReadyBoost tab doesn’t show; ensure its format is FAT. From the options offered, either dedicate the whole device to ReadyBoost or select ‘Use this device’, and don’t save files to that drive. My 7 year-old laptop has 4GB RAM and 2GB on its video card, which was all that could be used conventionally back then. It runs most things but always tended to struggle booting up Paintshop Pro, though not now. I’ve also found Far Cry (all right, I’m still on the first game) boots up faster. Windows has a habit of using available memory any way it can, and a further 16GB is always handy.
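For those who prefer the command line, the emptying-and-FAT-formatting step can be done in one go from a Windows Command Prompt. This is a minimal sketch, assuming the stick shows up as drive E: (check yours in Explorer first, as formatting the wrong drive wipes it); the ReadyBoost selection itself still has to be made through the Properties dialog afterwards.

```shell
:: Windows only -- run from an elevated Command Prompt.
:: Quick-format the USB stick as FAT32, which empties it completely.
:: Drive letter E: and the volume label are assumptions -- adjust to suit.
format E: /FS:FAT32 /Q /V:READYBOOST
:: Then right-click the drive in Explorer, open Properties > ReadyBoost
:: and choose how much of the stick to hand over.
```

The /Q switch performs a quick format; omit it if you want a slower full format that also scans the stick for bad sectors.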

If any of that sounds familiar, this is a cheap way to speed things up a bit on your computer.

Be aware, though, that occasionally boot-ups, and even waking the computer from sleep mode, might take a little longer. It looks like Windows is assessing its extra memory space and deciding what to do with it.

A couple of times, this took too long, necessitating a full reboot, which is when I realised the USB drive had to be completely empty. In case you don’t know how to force a reboot, just keep the power button pressed down until the machine switches off. On restarting, you’ll be given two choices. I found going to the previous ‘restore point’ the better option, as just booting up again gave the same problem. Always save all your material before sleep mode anyway. If this is the only flaw, then it’s something I can live with.

 

 

UncleGeoff

Geoff Willmetts has been editor at SFCrowsnest for some 21-plus years now, showing a versatility and knowledge in not only Science Fiction but also the sciences and arts, all of which has been displayed here through editorials, reviews, articles and stories. With the latter, he has been running a short story series under the title of ‘Psi-Kicks’. If you want to contribute to SFCrowsnest, read the guidelines and show him what you can do. If it isn’t usable, he spends as much time telling you what the problems are as he would with material he accepts. This is largely how he got called an Uncle, as in Dutch Uncle. He’s not actually Dutch but hails from the West Country in the UK.
