Wanting to be helpful, you might go ahead and click the link to help your colleague. That would be a mistake. The message is an example of a cyberattack called social engineering.
These sophisticated attacks manipulate users into making security mistakes or giving away sensitive information. First, the attacker investigates the intended victim to gather background information, such as where the victim works and where there might be potential points of entry or weak security protocols. Then, the attacker attempts to gain the victim's trust, perhaps by referencing a shared experience or claiming they've met before, as in the message above.
Raytheon BBN is developing a defense against social engineering attacks called SIENNA – Strategies for Investigating and Eliciting Information from Nuanced Attackers – which counters these attacks while simultaneously gathering information about the attacker's identity.
SIENNA is Raytheon BBN’s contribution to the DARPA Active Social Engineering Defense, or ASED, program, which is looking for ways to automatically identify, disrupt and investigate social engineering attacks. SIENNA uses a conversational AI framework to power chatbots, AI agents that engage with bad actors to waste the attackers’ time by generating relevant, human-like content. The bot has a large pool of content that it can generate to avoid repetition and will continue engaging for as long as the attacker stays responsive.
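The engagement loop described above can be sketched in a few lines. This is only an illustration, not SIENNA's actual implementation (which is not public): the reply pool, the class name, and the stop condition are assumptions used to show the core idea of replying from a pool without repeating itself, and engaging only for as long as the attacker stays responsive.

```python
import random

# Hypothetical stalling replies; a real system would generate
# relevant, human-like content rather than draw from a fixed list.
STALL_POOL = [
    "Sorry, which document did you mean exactly?",
    "My VPN is acting up today; can you resend the link?",
    "I'm in meetings all afternoon. What's the deadline on this?",
    "Can you remind me which project this is for?",
    "That link won't open for me. Is there another way to get it?",
]

class EngagementBot:
    """Keeps an attacker talking with plausible, non-repeating replies."""

    def __init__(self, pool, seed=None):
        self._rng = random.Random(seed)
        self._pool = list(pool)
        self._fresh = list(pool)   # replies not yet used this cycle

    def reply(self, attacker_message):
        # Stop engaging once the attacker goes silent.
        if not attacker_message:
            return None
        # Pool exhausted: refill so long conversations can continue.
        if not self._fresh:
            self._fresh = list(self._pool)
        choice = self._rng.choice(self._fresh)
        self._fresh.remove(choice)  # avoid repetition within a cycle
        return choice
```

In the real system the "pool" is generated content from a conversational AI framework, which is what lets the bot stay relevant over an arbitrarily long exchange instead of cycling through canned lines.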
James Ryan, Ph.D., principal investigator on the SIENNA program at Raytheon BBN, said, “Our collaborators on the ASED program have trained machine-learning models that recognize traits of social engineering attacks, including aspects of the message language and the provenance of the sender's email domain.”
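The traits Ryan mentions lend themselves to simple feature extraction before any model is trained. The sketch below is a guess at the kind of signals such models might consume; the specific phrases, function name, and `known_domains` parameter are assumptions, not features of the actual ASED models.

```python
import re
from email.utils import parseaddr

# Hypothetical phrases typical of social engineering lures.
URGENT_PHRASES = ("act now", "urgent", "verify your account", "click the link")

def extract_features(sender, body, known_domains=("example.com",)):
    """Extract toy features covering message language and sender-domain
    provenance, the two trait families named in the quote."""
    _, addr = parseaddr(sender)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    text = body.lower()
    return {
        "urgent_language": any(p in text for p in URGENT_PHRASES),
        "contains_link": bool(re.search(r"https?://", text)),
        "unknown_domain": domain not in known_domains,
    }
```

A trained classifier would combine many such signals with learned weights; the point here is only that both the message text and the sender's domain carry usable evidence.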
SIENNA bots masquerade as the intended victim to “converse” with the attacker, wasting as much of their time as possible and gathering clues about their identity that can be turned over to an ISP or law enforcement. The conversation might go like this: