Building a Bot to Help Cyberstalking Victims

Photo courtesy Charles Deluvio (Unsplash)

All was well on the SecTor social media channel until we started seeing some strange tweets mentioning us in July. “we stopped in the colonnade , And went on in sunlight , into the Hofgarten”, said one. Another was a long string of apostrophes. They came from Joe Gray, a noted SecTor speaker who will be presenting at the virtual conference this year. This wasn’t the Joe we knew. What was going on? Had his account been hacked?

We admit to being a bit concerned at first. That is, until we realised that this wasn’t a Twitter hijacking or a psychic meltdown, but something intriguing springing to life: Gray was in the early stages of testing out Decepticon, an AI-powered bot with a socially responsible mission.

Gray is an expert at open source intelligence (OSINT), which is a great example of a discipline that can be used for good or bad. An example of its use for good is Trace Labs, a nonprofit organisation based in Canada that uses teams of volunteers to help track down missing people by scouring the internet. Trace Labs, which only investigates a missing person when asked, punctuates these activities with regular capture the flag contests to help drive investigations forward.

An example of the bad is cyberstalking, where people try to find others without their permission. There are plenty of examples, some involving people who became obsessed with those they’d never met, and others perpetrated by ex-partners who used social media to harass their victims.

Some cyberstalkers don’t even understand that they’re doing anything wrong. Recently, Gray begged a young cybersecurity researcher to remove a blog post describing how he’d tracked down a crush who had ghosted him a decade earlier. The researcher even attempted password resets on her social media accounts to see what kind of phone she used. He has since apologised and removed the post.

“She didn’t even know that she was being stalked or abused. But someone who does know could connect Decepticon to their social media account, step away, and create a new account,” Gray says.

Escaping social media abusers

If you’re trying to escape someone, your social media account can become an attack vector: they might use it to pursue your contacts, track your location, or harass you directly. Social media is one of the most prominent ways for abusers to find and reinsert themselves into their victims’ lives, Gray warns.

Staying away from social media altogether isn’t realistic for many victims. In any case, why should they have to? One strategy to escape an online abuser is to create a new account and leave the other one behind, but visibly abandoning the old account isn’t always a good idea because it could clue the attacker in, encouraging them to search for your new online identity.

The Decepticon bot keeps the old account running by posting innocuous information on it that can’t be used to trace the victim. The hope is that it will keep the abuser occupied while allowing the victim to quietly step away, perhaps providing enough breathing room for them to cover their digital and physical tracks.

To develop the algorithm, Gray worked his way through some machine learning books and started experimenting with Google’s open-source TensorFlow machine learning framework. His Python-based code (available in this GitHub repo) sources tweets from Twitter’s free API, analysing the user, the text, and the tweet time. It slices the text into words and converts them to integers, then runs the sequences through a long short-term memory (LSTM) model, a type of recurrent neural network whose predictions are informed by the words that came before. The bot also posts at roughly the times and intervals the user would normally post, to avoid tipping off attackers watching for changes in behaviour.
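To give a rough sense of what that pipeline looks like, here is a minimal sketch in Python using TensorFlow’s Keras API. The corpus, hyperparameters, and the timing helper are illustrative assumptions for this post, not Gray’s actual code (which lives in his GitHub repo).

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Placeholder corpus: in practice this would be the account's tweet history.
tweets = [
    "we stopped in the colonnade",
    "and went on in sunlight into the hofgarten",
]

# Slice the text into words and convert them to integers.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(tweets)
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index

# Build n-gram training sequences: every prefix of a tweet predicts its next word.
sequences = []
for line in tweets:
    ids = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(ids)):
        sequences.append(ids[: i + 1])
max_len = max(len(s) for s in sequences)
padded = pad_sequences(sequences, maxlen=max_len)
X, y = padded[:, :-1], padded[:, -1]

# The LSTM carries state across the sequence, so earlier words inform
# the prediction of the next one.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64, input_length=max_len - 1),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, verbose=0)

# Mimicking posting cadence (illustrative): sample the next post's hour
# from the hour-of-day histogram of the user's historical tweets.
def sample_post_hour(historical_hours, rng=np.random.default_rng()):
    hours, counts = np.unique(historical_hours, return_counts=True)
    return int(rng.choice(hours, p=counts / counts.sum()))
```

Generating a tweet then amounts to feeding the model a seed phrase, predicting the next word, appending it, and repeating until the post is long enough.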

The result is a work in progress. Early versions of the bot didn’t have enough material to train on, so they used famous works from writers like Edgar Allan Poe and TS Eliot (hence the bizarre tweets we saw). The version of the code that we’ll see in October pulls from four male-operated Twitter accounts, all from people in close geographic proximity to each other, and two female-operated accounts. There’s enough diversity to make for a more realistic demo, according to Gray. In short, his bot is growing up.

“It’ll still have a certain level of uncertainty to it. But it’ll be a little bit more closely aligned, because it’s coming from real people and what real people actually say,” he says.

Will it be convincing enough to protect the vulnerable? A lot depends on how many posts are on the account being mimicked, because the code uses historical content to train its model. “If someone had access to a paid API, like maybe the Twitter Firehose API that people talk about, they could pull every tweet down and it would be far more accurate,” he says. “In this case though, we’re not looking for precision. We’re looking for accurate enough.”
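The history pull itself is straightforward. Below is a sketch using the tweepy library against Twitter’s standard v1.1 endpoints; the credentials and screen name are placeholders, and the standard tier only reaches back roughly 3,200 tweets per account, which is the kind of accuracy ceiling Gray describes.

```python
import tweepy

# Placeholder credentials for Twitter's standard (free) v1.1 API.
auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Page through the account's timeline; the standard tier stops at
# roughly the most recent 3,200 tweets, capping the training material.
corpus = [
    status.full_text
    for status in tweepy.Cursor(
        api.user_timeline,
        screen_name="target_account",  # the account being mimicked
        tweet_mode="extended",         # return full, untruncated text
        count=200,                     # maximum page size for this endpoint
    ).items(3200)
]
```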

If you’re interested in using OSINT for good, then Gray also gives online training (find out more at his site). For a deeper dive into how he created his algorithm and how it works, check out his SecTor talk this October. There’s still time to register here.
