These engineers are developing artificially intelligent hackers
March 6, 2016
Shah Sheikh

Could you invent an autonomous hacking system that could find and fix vulnerabilities in computer systems before criminals could exploit them, and without any human being involved?

That’s the challenge faced by seven teams competing in Darpa’s Cyber Grand Challenge in August.

Each of the teams has already won $750,000 for qualifying and must now pit its hacking system against the six others in a game of “capture the flag”. The software must be able to attack the other teams’ vulnerabilities as well as find and fix weaknesses in its own software – all while protecting its performance and functionality. The winning team will walk away with $2m.

“Fully automated hacking systems are the final frontier. Humans can find vulnerabilities but can’t analyse millions of programs,” explained Giovanni Vigna, a professor of computer science at University of California Santa Barbara, speaking at the RSA security conference in San Francisco.

Vigna is also the founder of hacking team Shellphish, which has built one of the systems – dubbed Mechanical Phish – that will compete in the Cyber Grand Challenge.

“Hacking is usually just a bunch of guys around a table who are very tired just typing on a laptop,” Vigna says, adding that it’s “not as sexy” as hacking portrayed in movies. “We do this because we either want to attack somebody, hack defensively to find bugs before they are deployed, or because it’s fun.”

Robo-hackers could be incredibly useful for organizations trying to defend their networks, letting them quickly identify and patch problems before anyone exploits them to steal data or disrupt online services – all without having a team of highly skilled human “uber-hackers” in house.

Outside of the Cyber Grand Challenge, other groups are working on hacking machines powered by artificial intelligence.

Konstantinos Karagiannis, chief technology officer of BT Americas, has been building a hacking system that uses neural networks to simulate the way the human brain learns and solves problems.

He described how an artificially intelligent program called MarI/O was able to learn an entire level of Super Mario World in just 34 tries – with no prior knowledge. The software wasn’t taught anything about how to play the game – it simply had a few parameters set. MarI/O just tried different things it “thought” would work and when they did, it “learned”.
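MarI/O itself uses neuroevolution (the NEAT algorithm) to grow a neural network, but the core idea – an agent that knows nothing about the game and improves only by keeping whatever scores better – can be sketched far more simply. The toy “level”, action names, and hill-climbing loop below are all illustrative assumptions, not MarI/O’s actual code:

```python
import random

# Toy stand-in for MarI/O's trial-and-error loop: the agent never sees
# the level itself, only a score telling it how far an attempt got.
# This is a minimal random-mutation hill climber, NOT the NEAT
# neuroevolution algorithm MarI/O actually uses.

LEVEL = ["run", "run", "jump", "run", "jump", "run"]  # hypothetical level
ACTIONS = ["run", "jump"]

def score(attempt):
    """Reward signal: how far into the level the attempt gets."""
    for i, (tried, needed) in enumerate(zip(attempt, LEVEL)):
        if tried != needed:
            return i
    return len(LEVEL)

def learn(seed=0):
    """Mutate one action at a time, keeping changes that score >= best."""
    rng = random.Random(seed)
    best = [rng.choice(ACTIONS) for _ in LEVEL]  # start with no knowledge
    tries = 0
    while score(best) < len(LEVEL):
        tries += 1
        candidate = best[:]
        candidate[rng.randrange(len(candidate))] = rng.choice(ACTIONS)
        if score(candidate) >= score(best):  # keep what "worked"
            best = candidate
    return best, tries

solution, tries = learn()
print(solution, tries)
```

Because a mutation that breaks an already-correct action lowers the score and is discarded, the agent’s progress through the level only ever increases – a crude analogue of how reward-driven search “learns” without being told the rules.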

“Using this approach a security scanner could identify intricate flaws using creative approaches you would have never thought of,” explained Karagiannis. “And it can be written with very modest hardware. A $1,000 GPU [graphics processing unit, typically used in gaming] can outrun a supercomputer that used to fill a building 10 years ago.”

Karagiannis hopes to demonstrate a proof-of-concept by the summer of 2016.

While robo-hackers could provide security professionals with a valuable weapon in their armoury, the risk is that they could fall into the wrong hands. Karagiannis told us that he wouldn’t be surprised if criminal hackers had appropriated these techniques “within a year”.

Alex Rice, co-founder of security company HackerOne, agrees. “Anything that can be used to defensively find vulnerabilities can be used by criminals – they all end up becoming a double-edged sword,” he told the Guardian.

Despite this, Rice thinks the rise of automation in security is a good thing. “Everybody is struggling to keep up. There’s not a single organization that hasn’t had a compromise that was life-threatening, so clearly everything we’re doing is failing.”

The best solution is to combine the skills of humans with machines. “Humans are much better at what we haven’t figured out yet,” he said.

“Until we have fully sentient machines, they still have to be instructed by humans.”

Source | The Guardian