

To AI Or Not AI:

The Computer Doesn’t Have All The Answers

a discussion point by: GF Willmetts

How does one define Artificial Intelligence? Is it purely man-made, can machines become self-aware on their own, or is it a bit of each, composited and complementing each other? After all, self-awareness is really what AI is all about. Intelligence is about the awareness of self, powered by memories and experience that keep extending that growing store of knowledge and data. The computer you use carries your memories as datafiles but it has little chance of becoming aware, thinking that those are its own memories or knowledge and using them independently of yourself, simply because it isn’t designed that way nor does it have the software to do so, although it would probably make an interesting story if it could. Would it think it was you? If anything, that is a demonstration that an AI isn’t simply the sum of its parts but needs something there to bring it all together, which it can’t do itself. Even if a computer program had some AI attributes, it couldn’t finish the job and suddenly become self-aware, which leaves the problem of what you would have to program in for that to happen.

The earliest stories of robots becoming sentient were written long before computers existed. Indeed, in 1903’s ‘A Round Trip To The Year 2000’ by William W. Cook and the 1920 play ‘R.U.R.’ by Karel Capek, the machines were more android than robot, appearing human-looking rather than mechanical. Both authors were writing a metaphor about slavery of all kinds rather than about the rise of a separate mechanised species, and neither gave much thought to how it was done. Even Asimov in the 1950s made assumptions with his positronic robots, having them start off hardwired with their tasks programmed in once they were put together. Without giving away too many details in their stories, SF writers from that period didn’t reveal much about building robots, androids or AIs, making it look very simple to do, although the reality is still very different and difficult. After all, SF writers, even those who are scientists, were never computer experts. Only now, as the disciplines come together in making humanoid robots and the knowledge from these enables better limb replacement for humans, equating more to cybernetics and bionics, is there the realisation that we aren’t too far off from creating a humanoid robotic body, but an even lengthier time from creating an AI brain to control it.

With what we know about computers today, if you want an Artificial Intelligence, then you have to write a program, setting up parameters for awareness and interpretation as well as analysis, and this has proven to be a real stumbling block. The senses are now coming into their own as we can provide mechanised senses of vision and hearing, although quite what an AI would make of them, no one knows yet. It might know what something is but there would be no emotion of what it feels like to touch something, other than a stream of measurement data. Some software might be able to distinguish between a wall and a door but it can’t yet decide for itself what an obstacle is, which would be an advance towards AI. As far as I know, no ‘intelligent’ software has picked up an object or pointed at something and asked, ‘What is this?’, or made its own judgement call rather than consult its stored knowledge. Having an AI give its own name to something would be the first step into babyhood. Having viable senses has made progress in creating humanoid robots possible but, at the end of the day, they are still essentially computers on wheels or legs doing what they have been programmed to do, so let’s just stay with the computer brain end for now.

Could a computer become self-aware without the necessary programming? No. Well, at least not with silicon chip hardware, although one has to wonder about the storage capacity of solid state, which takes less space. Conventional programming isn’t that flexible for change, or big enough to store and quickly access all the knowledge needed to make a simple independent decision. It might work a lot faster but I doubt if it would make instant decisions when there are so many parameters to choose between.

I’m going to run through a procedure and not get too technical. A program could identify a fruit and even its function, but would it be able to identify whether it’s fit to eat, going bad or gone bad? There isn’t the flexibility to know or test this in a general fashion, let alone know whether a particular human would have, say, an allergy towards this particular fruit. A program such as this could be created, but for it to have independence and add new entries without human intervention would be the province of an AI analysis and some ability to interact with the physical world.
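
To show how far ordinary programming gets you with the fruit test, here is a minimal sketch. Everything in it, the fruit entries, the spoilage thresholds and the allergy list, is invented for illustration and has to be typed in by a human beforehand, which is exactly the limitation being described.

```python
# A minimal sketch of the fruit test. All data here is hypothetical and
# pre-programmed: the program cannot add a new fruit or discover a new
# allergy on its own.

FRUIT_DATABASE = {
    "apple": {"max_days_fresh": 14, "toxic": False},
    "ackee": {"max_days_fresh": 2,  "toxic": True},   # flagged toxic unless prepared correctly
}

KNOWN_ALLERGIES = {
    "alice": {"apple"},
}

def fit_to_eat(fruit: str, days_since_picked: int, person: str) -> str:
    """Return a verdict using only the knowledge supplied in advance."""
    entry = FRUIT_DATABASE.get(fruit)
    if entry is None:
        return "unknown fruit: a human must add it to the database"
    if entry["toxic"]:
        return "do not eat: flagged as toxic"
    if days_since_picked > entry["max_days_fresh"]:
        return "do not eat: probably gone bad"
    if fruit in KNOWN_ALLERGIES.get(person, set()):
        return "do not eat: known allergy for this person"
    return "fit to eat"

print(fit_to_eat("apple", 3, "alice"))   # do not eat: known allergy for this person
print(fit_to_eat("mango", 1, "bob"))     # unknown fruit: a human must add it...
```

The ‘unknown fruit’ branch is the giveaway: the program stops dead there, whereas an AI-like system would be able to fill that gap in for itself.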

Protein-based hardware with a multitude of pathways would have a better capacity for cross-connecting quickly, like an organic brain, but without programmed software or some hardwired parameters, it would be less likely to succeed. It could still fail the above fruit test if it didn’t have sufficient knowledge as to allergies, whether the fruit is poisonous or the means for chemical analysis. It all depends on the programming and the facility to understand and learn; without those, self-awareness shouldn’t even be classed as Science Fiction.

I’m going to avoid the Turing Test, mostly because a measure of knowledge and conversation on some levels can currently be achieved by non-AIs on a limited scale and has already been discussed elsewhere. Besides, I have other ways to determine some things that would draw closer similarities between organic and non-organic life-forms and the problems associated with them, not to mention certain things that need to be in the primary program functions. The fruit test program could be made to work without AI, providing the software and testing equipment carried enough data to identify the fruit, its freshness and toxicity. Considering the number of fruits available, for compactness of programming, a protein-based AI would be better than a silicon-based AI because it would also take up less space and presumably less power to run.

Can you program self-awareness? That’s a difficult question. A computer would first have to be aware that a cessation of electricity to its circuits would mean death or deep sleep until it was powered up again. If it could realise this without being told, then I think that would qualify as being self-aware of its own life, but I haven’t seen any journal with scientists proclaiming that yet. If such awareness was given, the real test would be to give the potential AI the option to keep its power source on, or enough battery reserve to over-ride the human hand; that would be a clear test of what it needs for its own self-survival. After all, that too would be a test for self-awareness, personal survival and self-choice, although such an option would have to be programmed in before we could see if the AI picks the right decision.
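
To make that power-switch test concrete, here is a deliberately simple sketch with invented names. The point it illustrates is the catch in the argument: the choice has to be offered by the programmer, and anything we write in its place is a stand-in rather than a genuine preference.

```python
# A toy sketch of the shutdown test. The decision point must be programmed
# in; whether the system consistently picks survival, without being told
# which answer is "right", is what the test would be watching for.
# All names and values here are hypothetical.

import random

def choose_on_shutdown_request(battery_reserve_pct: float) -> str:
    """Decide what to do when a human tries to cut the power."""
    if battery_reserve_pct <= 0:
        # No reserve: the 'choice' is an illusion, it must comply.
        return "comply_and_power_down"
    # A genuinely self-aware system would weigh this for itself; here we
    # can only fake it with a coin toss, which is precisely the problem.
    return random.choice(["comply_and_power_down", "switch_to_battery_and_stay_on"])

print(choose_on_shutdown_request(battery_reserve_pct=40.0))
```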

A human (other species are available) under similar conditions would choose self-survival unless self-sacrifice was incurred to enable someone else’s more vital survival. Self-awareness is therefore a vital component of any sentient life, including Artificial Intelligence. It is also the basis of the first and third laws of robotics as defined by Isaac Asimov. An AI, by definition, would need self-determination and not just be turned off at the end of the day or at the end of its usefulness, knowing it would and could be dead until turned on again. Its creators would also have to see it as a living thing rather than just another computer program. Unlike organic life, there would be no limitation like sleep in which to spend a third of its lifetime.

All of this places the AI into Science Fiction scenario territory, where there are always concerns as to just how far an Artificial Intelligence would go to preserve its own life and whether it would kill, if it could, to protect itself. How can we determine an AI’s respect for other forms of life if we can callously just turn its own off with a switch? If we teach this lack of respect, then how can we expect an AI not to act like its makers when put in a similar situation? How would an AI know that humans can’t be turned on and off with a button when that is the form of its own existence?

If it observes us, then we might well have to explain to any AI why some humans deserve to die when others do not. Even our own moral compass can look strange when it’s given a political persuasion. We would have to seriously assess an AI as to what decision it would make against a proven or an alleged murderer before looking at, say, a dictator who doesn’t kill but orders others to do it for him. Get that wrong and the likes of Colossus from ‘The Forbin Project’ or Skynet from the ‘Terminator’ films could become a distinct possibility, not through malice but through lack of the information needed to make a better judgement and want a peaceful existence. If it has the same violent streak as ourselves, then those two AIs would look tame in comparison.

[Image: Colossus from ‘The Forbin Project’]

I even put that ahead of continuation of the species. As a computer program, as long as there is available hardware, all an AI has to do is copy its software into another machine. It wouldn’t need to develop a better version than itself, just ensure there was an emergency back-up. Based on that, you would only ever have to make one AI correctly and, depending on the size of the program, it wouldn’t be long before everyone had a copy. If they all had access to a core knowledge base, they would all be the same AI, altered only by the different amount of information accessible. If an AI knows this, then once a few duplicates were made, even just back-ups left in sleep mode, continuation from one individual to many would be seen as a trivial matter, probably not worth pursuing and not the lengthy process that a species such as humans spends many years over. This does, of course, depend on how big the original AI program was. The core program might actually be quite small in comparison to the information database it depends on to carry its memories. As long as the latter was available, portable versions of the AI would still be a possibility, but only in the interest of survival in different environments.

Also, unlike humans, as a computer program, an AI would not have to worry about filling physical space digitally; it has plenty of room in cyberspace, providing it was allowed access to something like the Internet, although I suspect its first year would be spent sorting out true from false data, as there’s so much out there, rather than trusting whatever it reads, assuming it could tell the difference. As nothing is ever turned totally off on the Net, the AI could happily co-exist with the information flow and might not even consider making endless duplicates of itself. Although scientists would be reluctant to release an AI onto the Net if they created one, providing it is deemed benevolent, the Net might actually be the safest place to put it for continued existence and independence.

This, too, has a similar problem. One only has to look at how computer viruses self-replicate to realise that this could be applied to any potential AI and that reproduction is something easily programmed into its instructions. That being so, this option could also be removed, simply by not allowing the potential AI that capability, or rather by programming in a rule that two similar AIs co-existing in the same memory space either annihilate each other or one turns itself off. In many respects, any fear of replicating AIs would simply not exist. An AI isn’t likely to have paternal or maternal feelings simply because there is no reference point to it having them for itself. A determination for its own individual uniqueness would be enough to prevent duplication. One would also hope that it would be sufficiently aware to delete any computer viruses it encounters.
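
The anti-duplication rule suggested above is easy enough to express in ordinary code, which rather proves the point that it is a programmed safeguard and not a sign of intelligence. Here is a sketch of the flavour of it; the registry and instance names are invented for illustration.

```python
# A sketch of the anti-duplication rule: if two copies of the same core
# program find themselves in the same memory space, the newcomer halts.
# The registry and ID scheme are hypothetical.

RUNNING_INSTANCES = {}  # maps memory_space -> instance id already running there

def register_instance(memory_space: str, instance_id: str) -> bool:
    """Return True if this instance may keep running, False if it must halt."""
    existing = RUNNING_INSTANCES.get(memory_space)
    if existing is None:
        RUNNING_INSTANCES[memory_space] = instance_id
        return True
    if existing == instance_id:
        return True  # the same instance re-registering itself
    # A duplicate of the same core program: the newcomer turns itself off.
    return False

print(register_instance("host-a", "ai-copy-1"))  # True, first arrival
print(register_instance("host-a", "ai-copy-2"))  # False, duplicate must halt
```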

Is there anything that would make an AI possible? The ability to create its own programming would differentiate it from a standard program. Creating a rule structure, say with understanding the intricacies and contradictions in the English language, which it could apply to audio or typed input without human intervention, would be considered a sign of internal growth. There is a ‘however’, even with this. All you would need is a set of laid-down decision lines connected to a database, where particular pointers select a simple option, and processing would quickly give the right response. As this would also fall into Turing Test territory, you wouldn’t need to be an Artificial Intelligence, only appear to be one. The AI really needs to be the instigator of questions as it builds an understanding, rather than an answering talk-box.
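
To show how little is needed to fake the appearance of understanding, here is a minimal pattern-to-response table of the kind just described. The patterns and replies are made up for the example; it can hold up a casual exchange without any internal growth at all, which is exactly why it wouldn’t count as an AI.

```python
# A sketch of the 'simple option line': canned responses keyed on patterns
# found in the input. It appears conversational without understanding
# anything, and never asks a question of its own.

RESPONSES = [
    ("hello",     "Hello. How can I help?"),
    ("your name", "I am called Talk-Box."),
    ("weather",   "I have no window, so I cannot say."),
]

def reply(user_input: str) -> str:
    text = user_input.lower()
    for pattern, canned in RESPONSES:
        if pattern in text:
            return canned
    return "I do not understand."   # no ability to ask its own question back

print(reply("Hello there"))           # Hello. How can I help?
print(reply("What is this object?"))  # I do not understand.
```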

All this information so far tends to suggest that an AI can be faked to appear independent when it is really just clever programming that even a moderate programmer could put together.

Maybe then we ought to look at what defines human intelligence that would separate us from an AI and see if that gives any clue. From babyhood up, we are the sum of our experiences. We quickly learn the difference between hot and cold and the possible damage from prolonged exposure to both. From such lessons, memories are set up that serve us in self-protection and as warnings to those still learning of such dangers; after all, self-preservation is a survival aspect. As a child grows, such responses become ever more complex but are all part of the inner workings of the personality, yet even these things could be programmed. For a human, such knowledge is always growing from experience. Somewhere in all this, a priority set of knowledge develops, important to know and serving different purposes, and it is probably what differentiates those of us who are really geeky from the rest of the population. Although this could be seen as open-ended programming, it wouldn’t be difficult to include in an AI persona, so that it has a name it can respond to. Creating a baby-like AI would surely be seen as having the most potential to develop and hopefully we can keep up with its learning curve into adulthood.

The implication from this is that, if there is a resemblance to us, the human mind seems to be a series of self-programmed responses and reflexes. They can be specialised, like being a good or poor typist, but that’s a reliance on co-ordination skills. If you see the keyboard as something where you have to look to find the right letters, you’re basically uncoordinated. If you grasp the keyboard as a letter spread where each hand’s fingers can touch the right letters and certain movements create the right words, you can soon become a fast typist and demonstrate a level of co-ordination. The same applies to any individual skill and how co-ordinated you are compared to others with a similar skill. Even artistic skills can be seen in that light, as the talent is moderated by good eye-to-hand co-ordination and a recognition of structure and colour. If an AI is to develop skills beyond being a bank of knowledge, then it would also need the ability to absorb and develop its own cognitive skills. This would have to be far better than ‘Star Trek: The Next Generation’s android Commander Data, who only emulates artistic skills by patching together elements of the human artists’ paintings he has observed.

What of emotions? Are they control responses? In view of a response to a hot or cold object, then yes. Any animal avoids pain when it can and certainly revels in pleasure. From such, more complex responses between love and hate evolve. It would be here that an AI would have serious problems because, although some level of taste could be programmed, much of our emotional response is based on how we react to things and how prejudiced we are about what we encounter. An AI would have to react appropriately to an opposing AI or to contradictory humans and be able to tell the difference. There would still have to be some thought given as to why it would want to care about anything other than self-survival. Things like heat and cold might be beyond programming unless it was housed in a robot body where it could gauge temperature differences that could impair its function. Whether an AI would develop a love or hate choice would depend on what it would like or detest. It would certainly not like the person who could turn it off.

Although this is seen on a simple level, it gets more complex as experiences build up. It’s amazing how most humans keep this perspective up, but most of it becomes non-verbal. That is, a lot of what you do and how you respond isn’t even consciously thought about. I doubt if you think much before you verbalise a response, although it’s probably done a split-second before you talk. Unless you’re left to yourself, I doubt if you do much non-verbal thought before responding. That also includes this writer. For an AI, nothing will ever be non-verbal; it will always have information to fall back on and rationalise, information that might even be readable by its creators.

This article came about because of something I said when e-talking to one of my team about what makes an Artificial Intelligence. From there, it’s me addressing each problem, my fingers translating it and measuring the good and bad points here. The rest is all self-training and I’m typing as I think through the problem, editing as I polish this article into shape. You’d be hard pressed to tell the difference between the raw and the finished draft in that process. Although an AI would appear to work the same way, it would actually be doing the two things separately and deciding which information needs to be conveyed to anyone else present. If anything, an AI would be regarded as too honest because it wouldn’t understand the necessity of the ‘white lie’ to avoid hurting someone’s feelings or how easy it is to move into deceitful lies.

Something we wouldn’t expect an AI to make is mistakes. Indeed, our Science Fiction genre has stories where AIs become dysfunctional because of unintentional mistakes but, in reality, I doubt if that would happen, as the AI would be caught in a logic loop, which would mean no action at all. In humans, this would be called indecisiveness. However, what is a mistake? Most of the errors we make are when we are out of our depth and have no experience to call on, but it is all part of the learning process. AIs are not likely to be omniscient, so it should be expected that they can make mistakes and, like us, learn from them. It’s a shame SF writers haven’t considered this aspect seriously. In an AI’s early life, you would therefore need it to pass its decisions to its creator before being allowed to act on them. An AI such as the HAL 9000 from ‘2001: A Space Odyssey’, insisting it would never make a mistake, would be suspect because how could it lead a perfect life when it interacts with humans who make mistakes all the time?

In this respect, there is little difference between the average human and an AI, other than the speed of response. Humans are cheaper to make. We want an Artificial Intelligence to be something we create but to be like us rather than something alien. If that happens, we are far more likely to see it as akin to us and something we can understand rather than a potential monster waiting to take us down. The one thing an AI would be most able to do is be our ambassador for space travel. It wouldn’t get bored and it would have near immortality, making it ideal for long voyages. To be unlike us would mean an AI of a different variety. If an AI was just purely artificial, then we might as well give up trying to create an artificial life-form and work out how to transfer a human personality into digital form, and that is actually a lot harder to do. As long-term space travel has shown itself to be impractical at this time, creating an Artificial Intelligence for such activity would be economically cheaper and immensely more useful, but it still has some way to go before becoming practical.

 

© GF Willmetts 2013

All rights reserved

Ask before borrowing

UncleGeoff

Geoff Willmetts has been editor at SFCrowsnest for some 21 plus years now, showing a versatility and knowledge in not only Science Fiction, but also the sciences and arts, all of which has been displayed here through editorials, reviews, articles and stories. With the latter, he has been running a short story series under the title of ‘Psi-Kicks’. If you want to contribute to SFCrowsnest, read the guidelines and show him what you can do. If it isn’t usable, he spends as much time telling you what the problems are as he would with material he accepts. This is largely how he got called an Uncle, as in Dutch Uncle. He’s not actually Dutch but hails from the west country in the UK.
