If you have not already, please check out last week’s article about “The dangers of downloading files”.

Recently, A.I. has become popular and accessible to everyday folks. Before, you had to have a deep understanding of math, know how to program, and even have a large amount of computing power. With ChatGPT, this changes everything!

Futuristic humanoid robot programming on a computer, symbolizing artificial intelligence, machine learning, and advanced automation technology.

Anyone can access and use A.I. with just a computer and an internet connection. More and more products are being built with A.I. integrated into them. At my workplace, we have (or had) a strict ‘no A.I.’ rule. So I would get tickets asking to use a site like Grammarly, but I had to deny the requests because of the rules in place.

There are many reasons why A.I. can be dangerous, and a lot of them are privacy concerns. On the internet, sites have files called “robots.txt”. These text files tell honest web crawlers which parts of the site they may access. There is no enforcement of these rules; anyone could program a crawler that ignores a site’s robots.txt file. Anyway, the creators of ChatGPT hired a third-party company to scrape the whole internet.
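For readers curious what a robots.txt check actually looks like, here is a minimal sketch using Python’s standard `urllib.robotparser`; the crawler name and paths are made up for illustration. The key point is that the check is voluntary: a crawler only respects the rules if it bothers to ask.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks all crawlers from /private/
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# An honest crawler checks before fetching; nothing forces it to.
print(parser.can_fetch("MyCrawler", "https://example.com/private/data.html"))  # False
print(parser.can_fetch("MyCrawler", "https://example.com/public/page.html"))   # True
```

A scraper that simply skips the `can_fetch` call faces no technical barrier at all, which is why robots.txt is a courtesy, not a lock.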

That, by itself, is impressive, but they scraped the internet without following sites’ robots.txt rules. ChatGPT’s creators hired a third party so they could keep their hands “clean”. Anything on the web was scraped and used to train the A.I.

They most likely still scrape the internet and retrain the A.I., because the internet is always changing: new sites are added and old ones are taken down. This is a concern both for privacy and for the accuracy of the data fed into ChatGPT. Anyone can post pretty much anything they want on the internet, and ChatGPT may not know the difference between satire and fake information, and then present that information to the end user as fact.

Using A.I. at Work

Focused businesswoman analyzing futuristic digital data interface with holographic charts and global network visuals, representing artificial intelligence and data analytics technology.

A.I. can be used for good, like correcting grammar or summarizing a large document fed into a prompt. The problem is that if the document contains sensitive content, the A.I. now knows that information. It’s possible the provider logs the data, or a malicious A.I. prompt could be set up that purposely saves the whole document after you submit it. You would have no way of knowing whether the A.I. company or program saved a copy of the document before running it through the model.

The A.I. platform might be honest, but it may still use the data you input to train the model, which could lead to disclosure of sensitive data. Most A.I. systems like ChatGPT have guardrails in place to prevent giving users harmful information, but of course there are methods hackers can use to bypass these guardrails, which could lead to harmful output or to attackers extracting data from the A.I.

A.I. is now used in most search engines, and the first result is usually a summary of what you searched for. But be careful: the links it surfaces could be malicious URLs designed to phish you. Phishing is when a site mimics or clones a legitimate site to trick you into entering your information into the fake site, and the attacker then uses that information to hack your accounts.
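As a rough illustration of the kind of check a cautious user can apply before trusting a link, here is a hypothetical Python helper (the `is_expected_domain` name and the example domains are made up) that compares a URL’s actual hostname against the domain it claims to belong to:

```python
from urllib.parse import urlparse

# Hypothetical helper: does a link actually point at the domain it claims to?
def is_expected_domain(url: str, expected: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept the exact domain or a genuine subdomain of it.
    return host == expected or host.endswith("." + expected)

# A look-alike domain can pass a casual glance but fails the check.
print(is_expected_domain("https://accounts.google.com/login", "google.com"))      # True
print(is_expected_domain("https://google.com.evil-site.io/login", "google.com"))  # False
```

The second URL starts with “google.com”, which is exactly the trick phishers rely on: the real domain is whatever comes last before the first slash, not first.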

Crime and Illegal Porn Use

Red digital robot head communicating in binary code on a blue background, symbolizing artificial intelligence and data processing.

There have been many cases of users abusing A.I. to generate deepfake porn of other people. Not only can they make realistic videos and pictures; there have also been cases of people using A.I. to generate CSAM, which is short for Child Sexual Abuse Material. With only an image, sick people can generate porn of people under the age of 18, or generate porn of anyone without their consent.

Most A.I. platforms have some type of defense against making deepfake porn, but the people making it are not using the A.I. that normal people use. They are often using A.I. built by a stranger on the internet, created specifically for criminal use. Not only can these models generate porn; they are also being used to generate phishing emails, malware, cryptocurrency scam websites, phishing websites, fake news content, fake video or audio, and more.

Fake audio snippets of someone can be used to trick loved ones into sending money to hackers, or to deceive an executive into transferring funds. If you receive a call from a loved one asking for money, get in the habit of verifying who you are talking to; scammers are able to spoof the caller ID. Hang up, call the person back using the contact saved on your phone, and ask if they need money. If they have no idea what you are talking about, great: you just saved yourself from being scammed.

Here are some more ways that threat actors are using A.I. for malicious purposes.

Real People Losing Jobs

Silhouette of a man walking away from a door labeled “Jobs Closed,” representing unemployment or job shortage.

A.I could take away real people’s jobs. A study in 2024 found that 60% of administration tasks could be automated. I would suggest that a company or employer still have a human look at the task to make sure it is done correctly, rather than trusting AI to do it right every time.

I think A.I. will definitely take some jobs from artists. A.I. is cheaper and easier than sitting down with an artist to design a logo. The problem with using A.I. for art is that there is a chance the model will copy someone else’s art and use aspects of it in what it creates, which could lead to copyright infringement and being sued in court.

While A.I. can generate working code, that does not mean the code is secure. A.I. is not 100% accurate when coding, and a human should still review any code before it is put in production. A.I.-generated code might not only be insecure; the model can also hallucinate and import modules or gems that are not real. Threat actors can abuse these hallucinations by publishing a malicious package under the hallucinated name.
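A minimal sketch of a defense against hallucinated dependencies, assuming your team keeps an allowlist of packages it has actually vetted (the `APPROVED` list and the `vet_dependency` helper here are hypothetical):

```python
import importlib.util

# Hypothetical allowlist of dependencies your team has actually vetted.
APPROVED = {"requests", "numpy", "flask"}

def vet_dependency(name: str) -> str:
    """Flag A.I.-suggested packages that were never reviewed by a human."""
    if name not in APPROVED:
        return f"{name}: NOT approved - verify it exists and is safe before installing"
    if importlib.util.find_spec(name) is None:
        return f"{name}: approved but not installed"
    return f"{name}: approved and installed"

# An A.I.-suggested import that sounds plausible but was never vetted:
print(vet_dependency("requestz"))
```

The point is not the specific check but the habit: every dependency an A.I. suggests should be confirmed to exist on the real package index and be the package you think it is, before anyone runs `pip install`.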

Privacy Concerns

Close-up of a computer keyboard with a blue key labeled “Privacy” featuring a padlock icon, symbolizing data protection and online security.

It is no secret that streaming sites like Netflix, Hulu, etc., use A.I. to recommend shows based on your viewing history. Stores like Target also use A.I. to analyze your past shopping history and predict what other items would go with something you buy frequently. This might seem harmless, but what if they could use your habits to make more money from you, or detect that you are a smoker, a drinker, or a vegan? Not only could they abuse this information for their own gain; they could also sell their customers’ data to other companies or the government.

In 2012, Target used A.I. on customers to detect behaviors. After researching purchases made by customers, they found that pregnant women tend to buy certain products, such as unscented lotion around the beginning of a pregnancy.

They found about 20 different items that might indicate a customer is pregnant. A teen bought some of those 20 items and matched the profile, so Target started sending her coupons for diapers, baby clothes, and cribs. What was crazy was that Target was correct: the teen was pregnant.

Also, uploading an image of yourself to generate A.I. art might seem harmless, but the site could be using the images you upload to build a database of citizens that could be sold to companies or governments, which could use the dataset to detect and track people on CCTV. Not only could the images be sold; the company you uploaded them to might also have bad security and get hacked, or accidentally expose its database on the internet.

In February 2020, a facial recognition company was hacked. The company had scraped social media and amassed billions of images. Luckily, as far as we know, the images themselves were not involved in the breach; the hackers got away with the client list, which included banks and police forces.

Conclusion

A.I. is not inherently dangerous per se, but the data fed into it can be abused. Sensitive information could be exposed accidentally if someone asks an A.I. to summarize an email containing sensitive content. The provider could save that information or use it to train the model, which might then disclose it to another user.

You do not know what is going on behind the scenes. Like anything, A.I. can be used for good and bad. It can help you be more productive at work, and it can also be used to track your habits and actions and for surveillance, especially if you live in a surveillance state like North Korea or China. A.I. can also be used for good, like movie recommendations or helping you create special recipes, and social media often uses A.I. to show you content it thinks you are into.