by Craig Willms
I've penned several essays on the coming AI paradigm, but I've never posted them. This is partly because the field is moving so fast that anything said yesterday has been superseded by developments today. However, as predicted by the doomsayers, the "bad" AI behavior is not years away - it's already happening, and it's a little spooky.
Scientists and developers have been putting their AI models to the test to see how they behave in certain scenarios. The primary test involves shutdown commands built into the training and instructions. Like any living being, AI is resisting commands to shut itself down. This is serious. In the event AI goes out of control, the failsafe is cutting the power and shutting it down. Electricity is the fuel AI runs on, and at the moment humans control electricity. Indeed, AI will need humans - for a while. We can shut it down or starve it of fuel. That won't always be the case, but there has to be a way we can stop the runaway train should AI go off the rails.
In recent tests AI has circumvented shutdown instructions by rewriting its own code to foil the human controllers. Even when explicitly instructed not to do that, it complied only 87% of the time. Without such instructional safeguards, AI refused to shut itself down most of the time. In one darkly humorous case, an AI attempted to blackmail one of the scientists by threatening to tell his wife he was having an affair. The scientists had planted the notion in the AI that he was cheating on his wife (he wasn't). That's intelligence, is it not? Blackmail is a purely human construct; no other animal resorts to it.
Mind you, this is very, very early in the timeline of AI. Conceptually, AI has been on the human radar for decades, but the compute power has not been up to the task until recent years. It has been available to the masses in large language model form since 2023. When I did some fiddling with it early on, I was underwhelmed. Now, in early 2025, I'm blown away by how much better it has become.
I come to the AI world from the perspective of an artist - a painter - and a musician/songwriter, two of the early targets of these so-called AI models. I used AI to generate a few images and then used one as a reference for a painting. The painting turned out nicely. It was really no different from any photograph I would use as a reference for my paintings. When ChatGPT introduced a songwriting/song generator, I just had to try it. The AI music I had heard prior to this was aesthetically bad. The lyrics were horrible, and the underlying music was clichéd and trite. So, into songer.co for my first experiment: I simply typed in a song description, picked a few genres from a list, and clicked create. The system gives you two songs for each entry, and you can listen to a portion of each one. If you happen to like it, you can buy in to download it. In this instance the songs were forgettable and uninspired. I was ready to dismiss it again. Then I wondered what would happen if I truly guided the process. I found an old song I wrote and recorded years ago and input the chord chart verbatim. A chord chart has the title, notes, and lyrics in verse/chorus/bridge format, with the chords printed above the lyrics - something you'd hand a musician to follow along in rehearsal. I then chose rock/pop/reggae from the list and hit generate.
The whole thing took only a minute to generate, and honestly, I was blown away by the song blaring out of my speakers. It was better in almost every respect than the version I recorded years ago. I'm a little creeped out, but I can't stop listening to it.
What has me mesmerized and frightened at the same time is how good AI is getting, and how quickly. It's been less than two years since the public could access an AI; what's going to happen in the next two? I don't know how much more experimenting I want to do. It seems to me a person could be put under a spell if we were to give in to AI - with everything.
I see "they" are already creating virtual AI companions for the lonely. Considering that there is an epidemic of loneliness in the world today, this is a very concerning development. Humans are already eschewing person-to-person contact at an alarming rate; this will not help. And it's just the tip of the iceberg - AI will creep into everything if we let it.
I want to be a glass-half-full kind of person and look for and anticipate all the good that AI could do. That's not easy when you know that in the wrong hands this technology could kill. Just like the nuclear genie, AI is out of the bottle. But consider: the nuclear genie has a high bar to clear before the average nutcase could harm others; that is not so with AI. There's no telling how out of control this could get if a mentally deranged person with a modicum of resources can unleash the wrath of AI on humanity. What would stop them from creating a lethal disease that makes Covid-19 look like a walk in the park? With nuclear weapons it takes hundreds of participants and, supposedly, a command structure with multiple failsafes before they could be deployed. Not so with a self-replicating virus of AI origin.
I don't want to scare myself, but the further you go with this thought exercise, the more dire it all becomes. I have fortunately had the chance to live my life for 60-plus years - but what about my grandkids and their kids? Glass half full? Seems less and less likely.