Google Experiments With Using AI to Flag Phishing Threats, Stop Scams


To see if AI can help stop cyberattacks, Google recently ran an experiment that used generative AI to explain to users why a phishing message was flagged as a threat.

At the RSA Conference in San Francisco, Google DeepMind research lead Elie Bursztein discussed the experiment to highlight how today’s AI chatbot technology could help companies combat malicious hacking threats. According to Bursztein, around 70% of the malicious documents Gmail currently blocks contain both text and images, such as official company logos, in an effort to scam users.

(Credit: Michael Kan/PCMag)

The company experimented with Google’s Gemini Pro chatbot, a large language model (LLM), to see if it could spot the malicious documents. Gemini Pro detected 91% of the phishing threats, but it fell behind a specially trained AI program that had a success rate of 99% while running 100 times more efficiently, Bursztein said.

Hence, using Gemini Pro to detect phishing messages doesn’t appear to be the best use of an LLM. Instead, today’s generative AI excels at explaining why a phishing message has been flagged as malicious, rather than merely acting as a phishing email detector, Bursztein said.
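The explanation-first framing described above might be sketched as a prompt that asks a model to justify a verdict already reached by an upstream classifier, rather than to classify from scratch. This is a hypothetical illustration; the article does not disclose Google’s actual prompts or API usage:

```python
def build_explanation_prompt(email_text: str) -> str:
    """Build a prompt asking an LLM to explain, not detect, a phishing verdict.

    Assumes a separate, cheaper classifier has already flagged the message;
    the LLM's job is the analyst-style write-up.
    """
    return (
        "The following email was flagged as phishing by an upstream "
        "classifier. Explain, point by point, which elements (sender, "
        "links, phone numbers, urgency cues) make it suspicious:\n\n"
        + email_text
    )

# Example usage with a fabricated message:
prompt = build_explanation_prompt(
    "Your account is locked. Call 1-800-555-0199 immediately."
)
print(prompt)
```

Separating detection (a small specialized model) from explanation (the LLM) matches the cost argument in the article: the expensive model only runs on the small fraction of mail already flagged.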

(Credit: Michael Kan/PCMag)

As an example, Bursztein showed a Google LLM analyzing a malicious PDF document disguised as a legitimate email from PayPal. The company’s AI was able to point out that the phone number in the document didn’t match official PayPal support numbers. In addition, the AI noted that the language in the PDF tried to create a sense of urgency, a tactic scammers often use on potential victims.

“That gives you an example of where I think the model will shine a lot, which is providing an almost analyst-like ability,” Bursztein said in a video accompanying his RSAC talk.
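The two signals the AI surfaced in the PayPal demo — a phone number that doesn’t match official support lines, and urgency language — can be mimicked in a crude, rule-based way. This is a minimal sketch for illustration only (the phone numbers and phrase list are made up; Google’s model reasons over the document rather than applying fixed rules):

```python
import re

# Hypothetical allowlist of official support numbers (illustrative only).
OFFICIAL_NUMBERS = {"1-888-221-1161"}

# A few urgency phrases scammers commonly use (non-exhaustive).
URGENCY_PHRASES = ["act now", "immediately", "within 24 hours", "account suspended"]

def explain_phishing_signals(text: str) -> list[str]:
    """Return human-readable reasons a message looks suspicious."""
    reasons = []
    # Flag phone numbers that don't appear on the official list.
    for number in re.findall(r"\d-\d{3}-\d{3}-\d{4}", text):
        if number not in OFFICIAL_NUMBERS:
            reasons.append(
                f"Phone number {number} does not match official support numbers."
            )
    # Flag urgency language, a common pressure tactic.
    lowered = text.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            reasons.append(f"Urgent wording detected: '{phrase}'.")
    return reasons

message = ("Your account suspended! Call 1-800-555-0199 immediately "
           "to restore access.")
for reason in explain_phishing_signals(message):
    print("-", reason)
```

The point of the demo, though, is that an LLM produces such explanations without hand-written rules, generalizing across documents a fixed ruleset would miss.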

(Credit: Michael Kan/PCMag)

For now, Google is merely experimenting with the capability, Bursztein told PCMag after his RSAC talk. “We thought it was cool to show. And people like it a lot,” he said. “If it gets you excited that was the goal. The goal was to show what is possible today and give people a model, that’s as much as I can say. There is no specific product announcement.”

One reason Google is probably holding off is that running an LLM requires a great deal of computing power. During his RSAC talk, Bursztein’s presentation noted that using “LM at Gmail scale [was] infeasible but great for small scale.”


In addition to fighting phishing threats, Google has also been investigating whether generative AI can find and automatically patch vulnerabilities in software code. But so far, the company’s research has found that LLMs struggle with vulnerability detection, Bursztein said. He attributed this to training data being “noisy” and full of variables, which can make it hard for an LLM to identify the exact nature of a software flaw.

To underscore this, Bursztein said Google ran an internal experiment last year that involved using an LLM to patch 1,000 C++ software bugs. The model was able to successfully patch only 15% of the vulnerabilities. In other cases, it did the opposite, introducing code that broke the program or caused other problems.

On the plus side, Bursztein said LLMs performed well at helping human workers quickly generate an incident response report when a cyberattack was detected within a network. The company’s internal experiment found that the generative AI tech could speed up the writing of such reports 51% of the time when the initial incident report was based on a draft created by a large language model.

(Credit: Michael Kan/PCMag)

