AI & OSINT

What are the benefits of AI for OSINT?

Artificial Intelligence (AI) is technology that can make informed, human-like decisions algorithmically rather than at random. In OSINT, AI’s benefits include the ability to sort through huge amounts of data and to identify both large, complex patterns and small but important anomalies. AI should not and will not replace the need for intelligence analysts, but it will shift the analyst’s role. For example, AI can be paired with web crawling applications that pull in data from a wide variety of social media websites; AI can then sort through the information it gathered and highlight the items an analyst should review further. With AI handling that first pass, analysts can draw on more information from a larger number of sources in less time. Using AI in OSINT helps analysts focus on the question of “why” a piece of information matters, and it increases the confidence and certainty they can attach to their work. An analyst’s ability to think critically and apply context is not something AI will replace. However, if used correctly, AI should allow analysts to work at a higher level and a faster pace than they could without it, which is especially valuable given the amount of information available in OSINT.
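
To make that crawl-and-triage workflow concrete, below is a minimal sketch of what the AI piece might look like in practice: a small text classifier, trained on posts an analyst has already labeled, scores newly crawled posts so the analyst can start with the items most likely to matter. The sample posts, labels, and library choice (scikit-learn) are illustrative assumptions, not a description of any specific OSINT tool.

```python
# Minimal sketch: use a small text classifier to triage crawled social media
# posts so an analyst reviews the most likely relevant items first.
# The labeled examples and crawled posts below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Posts an analyst has already labeled (1 = worth reviewing, 0 = noise)
labeled_posts = [
    ("protest being organized outside the data center on friday", 1),
    ("credentials for the target company are for sale on this forum", 1),
    ("drone spotted near the facility perimeter again last night", 1),
    ("just tried a new banana bread recipe, highly recommend", 0),
    ("great weather this weekend, heading to the lake", 0),
    ("adopted a puppy today, photos in the thread", 0),
]
texts, labels = zip(*labeled_posts)

# Train a simple classifier on the analyst-labeled examples
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(texts), labels)

# New posts pulled in by a web crawler (placeholder data)
crawled_posts = [
    "forum user offering building access badges for sale",
    "my cat knocked the plant off the shelf again",
    "group chat discussing a walkout at the facility next week",
]

# Score each post and present the list to the analyst, highest score first
scores = classifier.predict_proba(vectorizer.transform(crawled_posts))[:, 1]
for score, post in sorted(zip(scores, crawled_posts), reverse=True):
    print(f"{score:.2f}  {post}")
```

In a real workflow the labeled set would come from past analyst decisions, and the ranked output would feed whatever review queue the team already uses; the point is that the model does the sorting while the analyst keeps the judgment call.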

A number of subcategories fall under the umbrella of Artificial Intelligence. AI is becoming a broader tool than what it has most commonly been known for in the past: AI-generated writing and the creation of unique images only scratch the surface of this powerful tool’s potential. Of the list below, OSINT will most likely make use of the first two subcategories for tasks such as information gathering, code writing, and preliminary analysis. 

  • Reactive Machines AI: Operates on present data and considers only previously known or reported information. This type of AI cannot make predictions about what will happen in the future. 

  • Limited Memory AI: Can actively make decisions, but only from the information that it already has stored in its memory. This AI has a short-term, limited memory. 

  • Theory of Mind AI: Machines that aim to have the capability to understand and remember other intelligent entities' emotions and then adapt their behavior based on what they deem appropriate for the situation. This type of AI is still being developed and advanced in its accuracy and application.

  • Self-Aware AI: Machines that have their own consciousness and are considered “self-aware.” They have their own human-like needs and emotions. This type of AI has yet to be developed.

What are the downfalls of AI for OSINT?

Despite the assistance AI can provide the OSINT industry, there are also some serious downfalls. A few examples include the ability to fabricate and spread disinformation, the creation of false threats, a potential loss of information gathering skills for analysts, and enhanced quality of social engineering schemes and threat actor attacks. 

One of the biggest faults of, and misuses of, AI is its ability to create and spread content that is not accurate. Disinformation such as images, accounts, or messages in which fact and fiction are hard to differentiate can have serious consequences. AI makes OSINT work more difficult when the task shifts from gathering information to simply trying to distinguish which information is authentic. AI-generated false threats could also cause an analyst to shift attention away from real ones, creating a temporary gap or vulnerability in surveillance coverage. As an example, in May 2023 an AI-generated image of a fire near the Pentagon was briefly spread across social media before being identified as fake. The repercussions of this short-lived but impactful event were felt all the way to Wall Street, causing a quick but noticeable stutter in the stock market. 

Maintaining their craft and retaining their OSINT skills is one of the most important parts of an analyst’s job. With AI gaining momentum and offering analysts the ability to streamline investigations through less manual work, analysts run the risk of forgetting how to conduct specific research tasks. Per the Ebbinghaus forgetting curve, humans forget about 50% of new information within the first hour of learning it, and approximately 70% within 24 hours. This suggests that even if an analyst has used a specific research strategy many times, without regular repetition of that task or a deliberate effort to retain it, the analyst will likely forget how to perform it within days or weeks. Losing key research skills could cause an analyst to miss pieces of information and lead to a gap or vulnerability within their research.

We are already seeing the advancement of AI bring an increase in the quality of phishing and malware schemes used by threat actors. As a result, an individual’s or company’s chances of having sensitive information stolen or otherwise misused through these AI-improved attacks also increase. AI has the unfortunate ability to help threat actors create new, believable schemes for attacks against individuals and companies. In OSINT, this means analysts must be constantly aware of what links, applications, and programs they are opening, and what access they are allowing, on their devices. The ability to spot a probable attack will also be an essential skill for OSINT analysts, so they can quickly flag that item to others in their organization before it creates large-scale damage. Staying consistently up to date on the current state of AI, and its benefits and downfalls for OSINT, will help analysts and the companies they work for create, and reevaluate as necessary, relevant content for client knowledge and awareness. 


Tips from a DU-Zel OSINT Analyst: How to spot AI on your screen! 

What can you do as an individual to become more aware of AI in your day-to-day life, and in return be less susceptible to its disinformation and potential harm?

  • Be proactive. Keep yourself updated on current phishing schemes and scams to avoid them in the first place. 

  • View live news updates only from reputable sources, and re-verify that information before believing any image or headline you see. Second opinions aren’t only a good idea at the doctor’s office; they are good for news reports, too.

  • Be careful not to enter sensitive company information into an AI system such as ChatGPT.

  • Be aware of legal restrictions placed on AI in your industry. So far there has not been much in the way of actionable government oversight or legal limitations placed on AI. However, developments are happening at a rapid pace, so staying up to date on the legal landscape is important.

  • Read the how-tos on detecting AI-generated photos, including these four methods:

    • Check the title description/comments section of a photo for details on the source of the image.

    • Look for a watermark from an AI source in the image. Many AI generators are now implementing a watermark in their generated photos to help individuals quickly distinguish these photos from real images.

    • Look for distortions or abnormalities in the photo that shouldn’t be there, particularly around hands, feet, and limbs.

    • Utilize an AI image detector. Although not 100% dependable, AI or Not is one tool that does a decent job; a rough code sketch of this kind of check follows this list.

  • For organizations specifically, keep training up-to-date for IT and security teams so they can always be ahead of the constantly growing and changing malware/software threats. Investing up front in the training of these essential personnel will save your company future time and money.
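
For the AI image detector tip above, here is a rough sketch of how such a check could be run locally with the Hugging Face transformers library rather than through a hosted tool like AI or Not. The model identifier and file name are placeholders to be replaced with an AI-image-detection model of your choosing and the image you want to check; as noted above, treat the output as one signal alongside the manual checks, not as a definitive answer.

```python
# Rough sketch: classify an image with an AI-image-detection model.
# Both the model ID and the image path are placeholders (assumptions),
# and no detector of this kind is 100% dependable.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="some-org/ai-image-detector",  # placeholder: substitute a real detector model
)

# Accepts a local file path or an image URL
results = detector("suspicious_photo.jpg")
for result in results:
    print(f"{result['label']}: {result['score']:.1%}")
```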
