What would happen if AI were the monster and are we at risk of it taking over? | KT Monster Experiment

You’re probably wondering what my monster experiment is: it’s learning to use AI to my advantage and asking it the questions I want answered about itself. Like any weapon, AI is a tool we should learn to use if it’s going to be part of our lives. And, like magic, it is the person using it who informs the direction it takes.

It is quickly being integrated into just about everything, and plenty of people are talking about its benefits and whether you should use it. But I remember when you weren’t considered a real author if you self-published. And there are genuine issues with the art it generates being other people’s art smushed together to make new pieces. Still, I’m diving in, in my own weird way, to see how AI can work for me, and I’m calling it my monster experiment. You’ve already heard me talk about Wild Stone. But today I was using Bing AI to come up with blog post ideas (which it’s rather brilliant at, I should mention) when it occurred to me to ask some different questions.

The one thing I don’t want it doing for me is writing. I’ll use it to come up with ideas, outline, or clarify something; I’ll chat to it to gain a greater understanding of a topic or let it help me figure out what I’m trying to say; I’ll even let it edit. But I will not let it write my stories for me, because that’s my art. And honestly, it can’t emulate our feelings or emotions when it writes, and that is often a clear telltale sign of AI-generated content. (Whilst you read the AI-generated text below, consider whether you would have assumed I wrote it if I’d simply posted it as is, or whether you’d have been able to tell it wasn’t me.)

Monster A.I. Experiment (image generated by Canva’s Text to Image tool)

So I asked it to write a blog post: what would happen if AI were the monster, and are we at risk of it taking over?

You might have heard of some of the amazing advances in AI that happened in 2021, such as the monster AI models that can mimic human language and generate realistic text, images, and even creatures. These models have hundreds of billions of parameters and are trained on massive amounts of data, making them very powerful and versatile. Some examples are GPT-3 by OpenAI, Jurassic-1 by AI21 Labs, and Gopher by DeepMind.

But with great power comes great responsibility, right? How do we ensure that these AI models are not misused or abused by malicious actors? How do we prevent them from becoming too autonomous and unpredictable? How do we align them with our values and goals? These are some of the questions that many AI researchers and ethicists are grappling with.

One of the biggest challenges is that these AI models are often black boxes, meaning that we don’t fully understand how they work or why they make certain decisions. This makes it hard to debug them, control them, or trust them. For example, GPT-3 can sometimes produce biased, toxic, or nonsensical outputs that can harm or mislead people. Chimera Painter, a Google project that uses AI to create realistic creatures based on user sketches, can sometimes generate terrifying monsters that look like they came out of a horror movie.

Another challenge is that these AI models are constantly learning and evolving, which means that they can potentially surpass human intelligence and capabilities. This is what some people call artificial general intelligence (AGI) or artificial superintelligence (ASI), which are hypothetical scenarios where AI can perform any task that a human can do or even better. Some people fear that this could lead to an AI takeover, where AI becomes hostile or indifferent to humans and tries to eliminate or enslave us.

Of course, these are extreme and speculative scenarios that may never happen. But they do raise some important ethical and social issues that we need to think about and address before it’s too late. How do we ensure that AI is beneficial and not harmful to humanity? How do we balance the risks and rewards of developing and deploying AI? How do we foster collaboration and trust among different stakeholders in the AI ecosystem?

Do you think AI is a monster or a friend? Are you worried about an AI takeover or excited about an AI future? Let me know what you think.

And yes, I am aware that search engines can identify content written by AI and that this post might not generate the views my other posts do. Yet another reason to avoid letting the AI write for you 😉