Cisco and Building a Tame Attack AI

By Rob Enderle May 5, 2023 

Last week, Cisco held an interesting session about the significant amount of AI development going into its networking gear. Cisco is moving aggressively from reactive security solutions to solutions that take initiative, protecting data from where it originates until it arrives at its destination.

This AI model that surrounds the company’s ThousandEyes and vAnalytics efforts looks at behavior and flags anything that looks unusual in much the same way that AI solutions like BlackBerry’s Cylance do on a variety of platforms, suggesting the two solutions might work well together.
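To make the behavior-based approach concrete, here is a minimal, illustrative sketch of the core idea behind this class of monitoring: establish a baseline of normal readings and flag anything that deviates too far from it. This is a simple z-score test, not Cisco's or BlackBerry's actual algorithm; the function names and latency figures are hypothetical.

```python
from statistics import mean, stdev

def flag_anomaly(baseline, new_value, threshold=3.0):
    """Flag new_value if it deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Illustrative baseline of typical round-trip latencies in milliseconds
baseline = [20, 22, 21, 19, 23, 20, 22, 21]
print(flag_anomaly(baseline, 21))   # prints False: within normal range
print(flag_anomaly(baseline, 95))   # prints True: flagged as unusual
```

Real products model far richer behavior than a single metric, but the principle is the same: unusual is defined relative to the observed norm, which is what lets such systems react to attacks no one has catalogued yet.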

However, this move to initiative-taking solutions creates a testing problem: existing penetration testing relies on known methods of attack, whereas initiative-taking solutions are designed to address zero-day attacks that are, by definition, unique and new.

These new tame attack AIs hold a lot of promise, but they also have a significant problem that will need to be addressed.

The AI white hat

AI-based security will increasingly need to be ready to face AI-based malware and exploits as the individuals and companies that write these classes of code move to newer and more effective technologies. Much as IBM has argued for its Quantum Defense solution in anticipation of the massive security exposures quantum technology represents, AI-based solutions will need to develop defenses against weaponized AI.

The reason you want an AI penetration testing solution is to look for gaps in your AI security solution. Since you, as the developer, should know where your solution is most vulnerable, you can build an attack AI that specifically targets those vulnerabilities to see whether the solution can withstand them.

And you might run this penetration AI solution whenever you have extra bandwidth to make sure you are best prepared for the next novel attack vector or product. But this AI then becomes a huge danger.
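The testing loop described above can be sketched in a few lines: replay a set of attack variants against your detector and report which ones slip through. This is a deliberately toy illustration of the concept, not a real penetration testing framework; the detector, payloads, and function names are all hypothetical.

```python
def run_pentest(detector, attack_variants):
    """Return the attack variants the detector failed to flag."""
    return [a for a in attack_variants if not detector(a)]

def toy_detector(payload):
    """Toy signature-based check: flags payloads containing a known pattern."""
    return "DROP TABLE" in payload.upper()

variants = [
    "select * from users; drop table users;",           # known pattern
    "sElEcT ... dRoP/**/tAbLe users",                    # obfuscated evasion
]
missed = run_pentest(toy_detector, variants)
print(len(missed))   # prints 1: the obfuscated variant evaded detection
```

An attack AI would generate and mutate the variants automatically, but the output is the same: a list of exactly which probes evade the defense, which is what makes the tool so dangerous if it leaks.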

The danger of an AI white hat becoming a black hat

This then results in an AI product that is expert at penetrating your AI security solution, which means the security around this offering, from access controls to physical theft, must be absolute. If it isn't, and this penetration testing tool falls into the hands of a hostile entity, they could use what the tool knows about the site or product weaknesses to breach security.

With generative AI capable of analyzing and removing any constraints on this AI penetration tool, simply putting controls and limitations in the code may not work. You may need to deploy this tool on dedicated, locked-down hardware with provisions that make it far harder to steal or copy.

And clearly, this should be a closed-source product to ensure that the secrets it contains are very hard, if not impossible, to extract. An alternative might be for Cisco to provide the penetration testing capability as a service, so the code never leaves Cisco's sites; the escape or theft of the code into the wild would create a unique exposure just for Cisco offerings.

Finish reading at Techspective

Rob Enderle

President and Principal Analyst at Enderle Group

As President and Principal Analyst of the Enderle Group, Rob provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.