Security

The year 2017 wasn't a great one for cybersecurity: we saw a large number of high-profile cyberattacks, including breaches at Uber, Deloitte, and Equifax, along with the now infamous WannaCry ransomware attack, and 2018 started with a bang too with the hacking of the Winter Olympics. The frightening truth about these increasingly frequent cyberattacks is that most businesses, and the cybersecurity industry itself, are not prepared. Despite the constant flow of security updates and patches, the number of attacks continues to rise.

Beyond the lack of preparedness at the business level, the cybersecurity workforce itself is having an incredibly hard time keeping up with demand. By 2021, there are estimated to be an astounding 3.5 million unfilled cybersecurity positions worldwide, and the current staff is overworked, averaging 52 hours a week, which is not an ideal situation for keeping up with non-stop threats.

Given the state of cybersecurity today, the implementation of AI systems into the mix can serve as a real turning point. New generations of malware and cyberattacks can be difficult to detect with conventional cybersecurity protocols: they evolve over time, so more dynamic approaches are necessary. New AI algorithms use Machine Learning (ML) to adapt over time as well, making it easier to respond to these evolving risks.
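
To make that dynamic approach concrete, here is a minimal sketch of unsupervised anomaly detection over connection metadata, assuming scikit-learn is available. The feature columns (bytes sent, duration, port entropy) are hypothetical stand-ins for whatever telemetry a real pipeline would collect; this illustrates the general technique, not any particular product.

```python
# Minimal sketch: flag traffic that deviates from a learned baseline.
# Feature columns are illustrative placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: [bytes_sent, duration_s, port_entropy] (hypothetical).
baseline = rng.normal(loc=[5_000, 2.0, 3.5],
                      scale=[1_500, 0.5, 0.4],
                      size=(10_000, 3))

# Fit on "normal" traffic; contamination is the expected outlier fraction.
model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
model.fit(baseline)

# Score new connections: 1 = consistent with baseline, -1 = anomalous.
new_traffic = np.array([
    [5_200, 1.8, 3.6],       # looks like ordinary traffic
    [900_000, 45.0, 0.1],    # huge transfer to a single port
])
print(model.predict(new_traffic))  # e.g. [ 1 -1]
```

Retraining a detector like this on a rolling window of recent traffic is what gives the ML approach its adaptive edge over static signatures, which must be rewritten by hand every time attackers change tactics.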

Another great benefit of AI systems in cybersecurity is that they free up an enormous amount of time for tech employees. They can also help by categorizing attacks based on threat level, so that overstretched analysts triage the most dangerous incidents first. There's still a fair amount of work to be done here, but when machine learning principles are incorporated into your systems, they can actually adapt over time, giving you a dynamic edge over cybercriminals.
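
As a rough sketch of that triage idea, the example below trains a supervised classifier that assigns a severity label to alerts from a few numeric features. The features, labels, and data are all invented for illustration; a real deployment would learn from an organization's own labeled incident history.

```python
# Minimal sketch: categorizing alerts by threat level with a supervised model.
# Features and training data are hypothetical, purely for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each alert: [failed_logins, privileged_account (0/1), hosts_touched]
alerts = [
    [2, 0, 1],
    [1, 0, 1],
    [30, 0, 2],
    [45, 1, 3],
    [5, 1, 12],
    [60, 1, 25],
]
# Severity labels assigned by analysts during past incidents (illustrative).
labels = ["low", "low", "medium", "high", "high", "critical"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(alerts, labels)

# New alert: many failed logins on a privileged account across several hosts.
print(clf.predict([[40, 1, 8]]))  # e.g. ['high']
```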

Unfortunately, there will always be limits to AI, and human-machine teams will be the key to solving increasingly complex cybersecurity challenges. As our models become effective at detecting threats, bad actors will look for ways to confuse them. This is a field called adversarial machine learning, or adversarial AI. Bad actors will study how the underlying models work and either try to corrupt them, what experts call poisoning the models, or machine learning poisoning (MLP), or focus on a wide range of evasion techniques, essentially looking for ways they can circumvent the models.
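
To show the mechanics of evasion in the simplest possible setting, the sketch below trains a linear classifier on contrived "benign vs. malicious" feature vectors, then nudges a malicious sample along the direction that most reduces its malicious score until the model misclassifies it. The data, features, and step size are all assumptions chosen to make the idea visible, not a real attack tool.

```python
# Minimal sketch: evading a linear model by stepping against its weights.
# Everything here is contrived; it only illustrates how evasion works.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature vectors: benign cluster near 0, malicious cluster near 3.
benign = rng.normal(0.0, 1.0, size=(200, 4))
malicious = rng.normal(3.0, 1.0, size=(200, 4))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# Start from a clearly malicious sample and step against the weight vector,
# the direction that most quickly lowers the model's "malicious" score.
sample = np.full(4, 3.0)
step = 0.25 * clf.coef_[0] / np.linalg.norm(clf.coef_[0])
while clf.predict(sample.reshape(1, -1))[0] == 1:
    sample -= step

print("evading sample:", sample.round(2))  # now classified as benign
```

Defenses against this kind of probing (retraining on adversarial examples, monitoring for anomalous query patterns, keeping humans in the loop on borderline calls) are exactly where the human-machine teaming mentioned above comes in.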