
Researchers Claim They Bypassed Cylance's AI-Based Antivirus

Researchers at Australia-based cybersecurity firm Skylight claim to have found a way to trick Cylance’s AI-based antivirus engine into classifying malicious files as benign.

Cylance, which last year was acquired by BlackBerry and is now called BlackBerry Cylance, told SecurityWeek it has launched an investigation to determine if the researchers’ findings are valid or if their method works as a result of a misconfiguration of the product.


Artificial intelligence and machine learning are increasingly used by cybersecurity products, often being advertised as a solution to many problems, and even described by some as a silver bullet. However, Skylight researchers claim to have demonstrated that AI-based threat detection can be bypassed by malicious actors.


The experts reverse engineered the Cylance antivirus engine and identified what they described as a bias towards an unnamed video game. Researchers believe that Cylance products may be giving special treatment to files associated with this game due to its popularity.


They discovered that taking specific strings from the game’s main executable and appending them to the end of a known malicious file causes the security product to classify it as harmless.
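The mechanics of the reported bypass can be sketched roughly as follows. This is a minimal illustration, not Skylight's actual tooling: the exact game, strings, and selection heuristic were not published, so the string-extraction logic here is a generic `strings`-style scan and the file names are hypothetical.

```python
# Illustrative sketch of the string-appending bypass described above.
# Which strings were taken from the game executable is an assumption;
# Skylight withheld those details to prevent abuse.

def extract_printable_strings(data: bytes, min_len: int = 8) -> list[bytes]:
    """Collect runs of printable ASCII bytes, like the Unix `strings` tool."""
    found, run = [], bytearray()
    for b in data:
        if 0x20 <= b < 0x7F:          # printable ASCII range
            run.append(b)
        else:
            if len(run) >= min_len:
                found.append(bytes(run))
            run.clear()
    if len(run) >= min_len:           # flush a trailing run
        found.append(bytes(run))
    return found

def append_strings(malicious: bytes, benign_strings: list[bytes]) -> bytes:
    # Bytes appended past the end of the executable image do not change
    # its behavior, but a static ML feature extractor may still weigh them,
    # shifting the overall score toward "benign".
    return malicious + b"\x00" + b"\x00".join(benign_strings)
```

In practice the researchers would read the game's main executable, extract its strings, and append them to a known malicious sample before submitting it to the engine; the sample still runs unchanged because loaders ignore trailing overlay data.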


“We chose Cylance for practical reasons, namely, it is publicly available and widely regarded as a leading vendor in the field,” Skylight said in a blog post. “However, we believe that the process presented in this post can be translated to other pure AI products as well.”


Skylight has conducted tests on known hacking tools such as Mimikatz, ProcessHacker and Meterpreter, and malware such as CoinMiner, Dridex, Emotet, Gh0stRAT, Kovter, Nanobot, Qakbot, Trickbot and Zeus.


It achieved a success rate of over 83% in bypassing the Cylance engine when tested against 384 malicious files. The score assigned by the product to the files in many cases shifted from -900, which indicates that the file is clearly malicious, to 900, which indicates that the file is harmless.


“The concept of a static model that lasts for years without update may hold theoretically, but it fails in the arena,” Skylight explained. “Granted, it is harder to find a bias in an AI model than to bypass a simple AV signature, but the cost of fixing a broken model is equally expensive.”


“We believe that the solution lies in a hybrid approach. Using AI/ML primarily for the unknown, but verifying with tried and tested techniques used in the legacy world. This is really just another implementation of the defense in depth concept, applied to the endpoint protection world,” the company added.


Skylight made its findings public without first giving BlackBerry Cylance a chance to investigate the issue, but it has withheld the detailed technical information needed to reproduce the bypass in order to prevent abuse.


“We did not consider this to be a software vulnerability, rather a bypass, for which disclosure is less common,” Shahar Zini, the CTO of Skylight, told SecurityWeek. “Also we had no intention of making the information required to actually bypass Cylance public anyway. In any event, Cylance have been provided with the required information for the fix.”

BlackBerry Cylance said in an emailed statement that it’s “aware that an unvalidated potential bypass has been publicly disclosed by researchers without prior notification.”


“Our research and development teams are looking into whether the issue is a true bypass or due to some misconfiguration of the product on the researchers’ part or other similar factors. If the bypass is determined valid, remediation efforts will occur immediately. More information will be provided as soon as it is available,” the company stated.


Gregory Webb, CEO of Bromium, a company that provides malware protection through application isolation, commented, “The breaking news on Cylance really draws into question the whole concept of categorizing code as ‘good or bad’, as researchers were able to just rebadge malware as trusted – they didn’t even have to change the code. This exposes the limitations of leaving machines to make decisions on what can and cannot be trusted.”


“Ultimately, AI is not a silver bullet, it’s just the latest craze in doing the impossible – i.e. predicting the future,” Webb added. “While AI can undoubtedly provide valuable insights and forecasts, it is not going to be right every time and will always be fallible; ultimately predictions are just that, predictions, they are not fact. As this story shows, if we place too much trust in such systems' ability to know what is good and bad we will expose ourselves to untold risk – which if left unattended could create huge security blind spots, as is the case here.”

