Generative AI and Scams

If you received a phone call in the middle of the night from a loved one, maybe your child, sibling, or grandchild, desperately needing money to get out of a bad situation, would you send it? If they said, "Please, Mama, I need help!" would you?


Generative AI, such as ChatGPT, has been readily accessible to the public for almost one year now, and as of the end of September 2023, ChatGPT has access to the internet in real time. This access will continue to improve the tool's results. While generative AI has many positive use cases, our current biggest concern is the effect it will have on social engineering and scams. As of April 2023, Darktrace noted that social engineering attacks using generative AI had risen by 135% since ChatGPT became publicly available.


Generative AI is artificial intelligence that can generate text, images, videos, sound, and other content based on the data the tool was trained on. Previous publicly available AI models and services could not easily create the type of content generative AI now produces. Additionally, many of these services have a low barrier to entry, making them cheap and easy-to-use tools for threat actors.


Generative AI can aid several types of common phishing attacks, including:

  • Email phishing: Threat actors can use generative AI to replicate professional emails and create convincing phishing messages, which may bypass existing spam filters. 

  • Smishing (text phishing): Threat actors can use generative AI to create realistic texts without the garbled wording or language errors that often give scam texts away. 

  • Vishing (voice phishing): Threat actors can create a realistic voice clone from as little as three seconds of recorded audio. (Note: According to McAfee, 53% of adults have posted enough voice recordings on social media to be at risk of their voice being cloned by AI.)


This technology heightens the need for individuals to be vigilant and conduct due diligence before sending money, granting access, or taking other actions based on an email, text, or phone call. So how can you spot an AI-generated scam or social engineering attack? The first step is always to pause, then consider: 


  1. Is the desired information or outcome time-sensitive or under the guise of crisis? 

  2. Is the communication from a different phone number or unusual means of communication, such as a messaging app?

  3. Does the message seem generic or not specifically intended for you? 

  4. If they are requesting money, is it through unusual means, such as cryptocurrency, gift cards, or international wire transfers? 

  5. Does the request make sense? 


If you are concerned about your exposure to these types of attacks or would like to learn more about how to protect yourself, please send us a message.
