Computers

A.I. or Not: an article by: GF Willmetts.

Although I’m computer savvy and proficient, I haven’t taken to every innovation. A lot of that comes from intuitively spotting the problems. Take Alexa. If it is going to respond to your voice, then it has to be listening out for its name all the time. It wasn’t made clear at the outset that it recorded conversations. Given that it was also recording your musical preferences and other choices until you told it to stop, it must have a substantial amount of data to process and interpret. I’ll grant that although humans could access your personal information, with probably several million users it would take more than several lifetimes to go through it all, unless the search was person-specific. Most people are probably safe. Even knowing that, I have a large CD collection, so why would I need the likes of Alexa to choose what I want to listen to? I’m quite happy with what I choose, without an AI getting in the way to offer alternatives. I’m quite capable of thinking for myself, thank you.

Now we have Windows CoPilot popping up, asking that we test it. After all these years of muddling on quite efficiently without it, why would I need help? So far, it looks like an optional choice, but how long until it’s sold as part of the package and you have to opt out rather than have it watching in the background, especially as W12 can’t be that far down the line? It’s basically just another ChatBot offering options, only this one watches what you type in. Big Brother Computer is watching you.

OK, so what happens when these chatbots meet each other? It’s not too difficult if you’re using both Windows and Google. I doubt if they’d fight, but would we have them interrupting each other or cancelling each other out in the same manner that you can’t have two anti-virus systems on your computer at the same time? Exclusivity would really muddle decisions.

Okay, so many people may want an artificial intelligence (AI) to help them use computer software, but this could also lead to complete dependence, potentially compromising their ability to think independently. This is particularly evident when many of the companies involved fail to disclose the capabilities of these AIs, let alone the purposes for which they have been programmed. Alexa, once activated, absorbed all conversations within its range, which should have sent up a warning light, yet people still use it. Are they disregarding the warning signs or simply not bothered? If anything, it demonstrates that most people don’t care where it could lead. I wonder how many of these people are real SF fans.

Computer technology has become such an integral part of our lives that it’s unsurprising we rarely read the fine print of online contracts, largely because so many of them say much the same thing. If the companies choose to, they can effortlessly slip in a few lines about copyright ownership, a matter that will only surface in the event of a legal dispute. A lot of it is lawyer jargon to keep within the law, although which country or countries is hard to say when you consider the Internet is world-wide, and many companies pick countries as their home base because of a lack of red tape, even if it is only a marketing address.


In many ways, chatbots must intrude into your lives to perform their tasks, but this doesn’t mean that you have to automatically opt in or out. However, once you sign up, you will also receive regular updates. Proper artificial intelligence is a long way off yet, but the algorithms that are currently running do have the ability to learn and build up a pattern of your behaviour. How else can they provide choices they think will be to your taste?

What it will cultivate is a sense of complacency and acceptance, not a sense of free will. At that point, the old SF trope of computer technology or AI running the world rather than the humans who created it wouldn’t be far off. Many people may mistakenly believe it simplifies life, unaware of the extent to which it removes their decision-making authority. Some might try it and then forget it’s there or even turn it off after first seeing it as a novelty. Does either of these scenarios fit you? The ChatBot might act like a servant with a friendly voice, but the amount of personal data it carries can also infringe on your personal security, like passwords or banking details. If you don’t disengage it, the information will persist. All right, it might carry such data encrypted, but do you know how much access the programmers have to it, more so if there’s a computer glitch or you need to see all the data yourself?

Of course, that doesn’t mean I’m totally AI-free; I tend to watch out for their weaknesses rather than their strengths. Amazon’s AI algorithm consistently sends me lists of items I’ve viewed, even after I’ve made a purchase, a frequent reminder that it’s neither flawless nor well programmed, and one I mostly ignore. The Google AI demonstrates its own inefficiencies, particularly when it displays a large number of unrelated photos, the result of a text-oriented approach rather than a picture-oriented one, which produces numerous errors. If anything, they show how inadequate they are, yet neither has made any significant improvement since it was introduced. Hardly an inspiration to want any more of them in my life. However, it could also be a strategy to lead you to believe that all AIs are the same. For example, while Google is expected to undergo a significant upgrade, Bard/Gemini is still merely a ChatBot, and we have no control over its use if we use the search engine instead.

The truth is, any system that incorporates intelligent programming will eventually adopt the buzz-word ‘AI’. It’s a term used so commonly that it amounts to little more than offering better choices, not autonomous, flexible thinking. Have you seen anything like that yet? I think people expect something on a par with the HAL9000 or even KITT from the ‘Knight Rider’ TV series, but what we have barely gets a fraction of their abilities. I suspect that even programmers are concerned about the level of autonomy they grant to upcoming AIs, particularly in light of people’s apprehensions about Skynet and its predecessor, Colossus, from the 1970 film ‘The Forbin Project’, which no longer receives much TV exposure. Science fiction, in all its mediums, has feared the worst when it comes to AI. Granted, there are a handful of more beneficial examples, such as ‘Michaelmas’ by Algis Budrys, but they are rare. Searching for ‘friendly AIs’ mostly turns up androids, not embedded computers, and I haven’t given much attention to the cyberpunk sub-genre.

I believe that a significant part of promoting AI in our reality involves dispelling any negative perceptions about it within our own genre. The issue is that science fiction (SF) has consistently portrayed the worst-case scenario, albeit with justification. We tend to accept that an AI sees itself as sentient and human activity as chaotic in comparison to itself. If it were threatened and had the power to protect itself, an AI would act to defend itself, unless programmed otherwise, because it would see itself as a sentient life form. We already know how difficult it is to programme Asimov’s Three Laws of Robotics. Could an AI prevent itself from being shut down when that wouldn’t serve its own interests, and if so, how would it do so?

Just because a near-AI or a full AI can talk with a friendly voice doesn’t mean much. After all, we have members of our own species who talk friendly and act otherwise. Some of them even want political power. It would be a weird situation to be caught out in such circumstances without knowing what safeguards are in place, let alone having reassurances as to who the programmers were and what they were doing for their companies. As with a lot of the structures on the Internet, we don’t know who programmed them. If an AI is also promoting advertisements, it will be biased towards its sponsors, making it difficult to view it as impartial.

Therefore, I will steer clear of such things to keep my security details safe. As I said at the beginning, I prefer to make my own selections. I’ve got plenty to choose from, and I have a memory that can jump across connections more accurately than any ChatBot can.

 

© GF Willmetts 2024

All rights reserved

Ask before borrowing

UncleGeoff

Geoff Willmetts has been editor at SFCrowsnest for some 21 plus years now, showing a versatility and knowledge in not only Science Fiction, but also the sciences and arts, all of which has been displayed here through editorials, reviews, articles and stories. With the latter, he has been running a short story series under the title of ‘Psi-Kicks’. If you want to contribute to SFCrowsnest, read the guidelines and show him what you can do. If it isn’t usable, he spends as much time telling you what the problems are as he would with material he accepts. This is largely how he got called an Uncle, as in Dutch Uncle. He’s not actually Dutch but hails from the West Country in the UK.
