As companies across the globe race to fortify their cybersecurity defenses, they’re increasingly finding themselves navigating a complex maze when it comes to security testing. The past decade of innovation has produced an ecosystem booming with countless tools, yet integrating those tools while avoiding tool sprawl is proving to bring its own set of challenges and vulnerabilities.
At a recent security summit, Rob Cuddy, Solution Architect and Application Security Evangelist at HCLSoftware, observed that a CISO at a large healthcare organization had championed a ‘best of breed’ approach for each security discipline, such as network management, identity and access management, and threat intelligence. But assembling point tools this way often comes at the cost of a standardized approach, and it causes problems in many organizations.
The CISO summarized the problem well when they stated, “The problem with that approach is we never stopped to look at whether the tooling we already had addressed our issues.”
While best-of-breed tools are effective in their respective domains, companies today struggle to present a comprehensive view of their risk management status. Without that view, it is difficult to report to a board on where the most significant vulnerabilities are and what steps to take to address them, according to Cuddy.
“What I’m seeing a lot of CISOs are struggling with, and trying to do today, is they’re getting asked to come into a boardroom and justify the budget, or say what we need to do for next year. And what they want to be able to say is, ‘Hey, today, we are 25% likely to have a million-dollar breach in the next six months. But if we do these three things, that risk goes down to 5%.’ And they want to know what those three things are.”
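The board-level statement Cuddy describes is, at bottom, expected-loss arithmetic. A minimal sketch, using the hypothetical figures from his example (a $1M breach at 25% vs. 5% likelihood), not real measurements:

```python
# Illustrative expected-loss arithmetic for a board-level risk statement.
# The breach cost and probabilities are the hypothetical figures from
# Cuddy's example, not real data.

BREACH_COST = 1_000_000  # estimated cost of a breach, in dollars


def expected_loss(probability: float, cost: float = BREACH_COST) -> float:
    """Expected loss over the period = probability of a breach * its cost."""
    return probability * cost


before = expected_loss(0.25)  # 25% likelihood -> $250,000 expected loss
after = expected_loss(0.05)   # 5% likelihood  -> $50,000 expected loss

print(f"Expected loss before mitigations: ${before:,.0f}")
print(f"Expected loss after mitigations:  ${after:,.0f}")
print(f"Reduction from the three fixes:   ${before - after:,.0f}")
```

Framing security spend this way is what lets a CISO argue that a specific set of mitigations buys a quantifiable reduction in expected loss.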
Many organizations are reconsidering their previous approach of spreading their budget thinly across various security areas. They’re now contemplating which areas warrant more attention – should their focus be on fortifying AppSec? Or, is the need more urgent in the realm of endpoint management? Perhaps a greater emphasis should be put on improving developers’ threat modeling skills to enable superior design outcomes.
“Now you have things like Azure DevOps, and you have plugins and organizations like HCLSoftware that are trying to write end-to-end tooling to tie it all together so that you can get one view of it. I think this is also why value stream management is starting to get popular because people want the one view of all of that,” Cuddy said. “Tool sprawl is not at all unique to security. But I think it shows up really well there.”
One way to gain greater visibility across the application security landscape as a whole is to implement interactive application security testing (IAST). IAST serves as a monitor for security and provides a great way to include security as part of overall quality. Cuddy said he’s seeing the conversation about this kind of testing evolve at many of the big testing conferences today like STARWEST and the DevOps Enterprise Summit.
“Let’s imagine you’re doing functional testing, in particular, because this is great for that [IAST]. You’re exercising the application, you’re testing out scenarios in many cases manually, for the things that are just harder to write a script for. So when you have that, and these guys are exercising the code under normal conditions, what IAST is doing is analyzing the traffic, and anything that identifies as malicious or potentially risky, it’s flagging,” Cuddy said. “And so basically, you’re getting security testing with your functional testing for free.”
There’s no learning curve for the QA person because they’re doing what they usually do, but now, a little monitor is running in the background that will flag stuff right away. This information can then be included as part of an organization’s overall view of quality.
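The idea of a passive monitor riding along with functional tests can be sketched in a few lines. This is an illustrative toy, not AppScan’s actual detection logic; the pattern names and `TrafficMonitor` API are assumptions made for the example:

```python
# Toy sketch of the IAST idea: a passive monitor inspects the traffic that
# ordinary functional tests generate and flags anything that looks risky.
# The patterns and class names here are illustrative only.
import re
from dataclasses import dataclass, field

RISKY_PATTERNS = {
    "sql-injection": re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE),
    "xss": re.compile(r"<script\b", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./"),
}


@dataclass
class TrafficMonitor:
    findings: list = field(default_factory=list)

    def observe(self, url: str, params: dict) -> None:
        """Called for every request the functional tests exercise."""
        for name, value in params.items():
            for issue, pattern in RISKY_PATTERNS.items():
                if pattern.search(value):
                    self.findings.append(
                        {"issue": issue, "url": url, "parameter": name}
                    )


monitor = TrafficMonitor()
# The QA tester just runs their normal scenarios; the monitor watches.
monitor.observe("/search", {"q": "laptops"})                        # benign
monitor.observe("/login", {"user": "admin' OR 1=1 --", "pw": "x"})  # flagged
for finding in monitor.findings:
    print(finding)
```

The point of the sketch is the workflow, not the patterns: the tester does nothing new, and the flagged findings feed into the same quality reporting as the functional results.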
HCL AppScan on Cloud (and soon HCL AppScan 360º) offers the ability to take results from IAST and correlate them with static and dynamic testing results in one platform. Because the results are seen in relation to one another, it becomes clearer which vulnerabilities are most critical and exploitable, making it easier to prioritize limited resources for fixing them.
“If I find a vulnerability through static testing, maybe it’s through data flow or taint analysis and you want me to fix it, well as a developer, I need to know the threat vector that caused it. So I may know the code, but I need to know what was the attack that actually caused this to happen. Well flip that coin around: If you’re only doing dynamic testing, great, you get the threat vector, but you have no idea where the code is. So we need a way to correlate those together to give people a better way to target the fixes. And that’s where we leverage IAST, so those things all start working together,” Cuddy explained. “If I’m seeing an issue in both static and interactive, that means that’s absolutely exploitable.”
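The correlation Cuddy describes can be sketched as a join: a static finding contributes the code location, a dynamic finding contributes the attack vector, and an issue seen by both is treated as confirmed exploitable. The field names and the join key below are assumptions for illustration, not AppScan’s data model:

```python
# Hedged sketch of correlating static and dynamic findings. Static results
# carry the code location; dynamic results carry the threat vector; issues
# found by both get top priority. Field names are illustrative only.

static_findings = [
    {"cwe": "CWE-89", "endpoint": "/login", "file": "auth.py", "line": 42},
    {"cwe": "CWE-79", "endpoint": "/profile", "file": "views.py", "line": 7},
]
dynamic_findings = [
    {"cwe": "CWE-89", "endpoint": "/login", "vector": "user=' OR 1=1 --"},
]


def correlate(static, dynamic):
    """Join findings on (CWE, endpoint); matches are marked exploitable."""
    dyn_index = {(d["cwe"], d["endpoint"]): d for d in dynamic}
    results = []
    for s in static:
        match = dyn_index.get((s["cwe"], s["endpoint"]))
        results.append({
            **s,
            "vector": match["vector"] if match else None,
            "priority": "exploitable" if match else "review",
        })
    return results


for issue in correlate(static_findings, dynamic_findings):
    print(issue)
```

A developer picking up the first result gets both the file and line to fix and the attack that triggered it, which is exactly the pairing Cuddy argues neither static nor dynamic testing provides on its own.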
The need for visibility, transparency, risk understanding, and security is paramount throughout the SDLC
In the world of software development, the landscape has undergone significant shifts over the years, leading to both standardization and diversification of practices. In the past, organizations followed top-down mandates for tool usage, with build and release engineers writing scripts to integrate various tools.
However, these tools often became burdened with additional functionalities beyond their intended purpose, resulting in process inefficiencies. To address these challenges, the concept of component-based development emerged, promoting the breaking down of applications into smaller, manageable pieces. This shift towards agility and faster delivery created a disparity between the speed of development and the ability of operations to keep up.
“So you have this big pendulum swing from standardization to the developer is king, and whatever they want to work with, that’s what we’re gonna use, because the teams are small. Well, that worked for a while. And then you started to have the pendulum swing back a bit to where, okay, we still need visibility, we still need transparency, we still need to understand risk. And security kind of stayed in that sort of standardized mode of, well, it’s a separate silo. Like, if you’re in development, we have no idea what those guys are doing. They just come and bug us whenever there’s a critical vulnerability that needs to be dealt with,” Cuddy explained.
As DevOps gained momentum, people started to realize that the best organizations were the ones that were mixing in good secure design up front and they had elements of security testing throughout, so that they were releasing not only high-quality code in the way that we think of it traditionally but high-quality code that was also safe, according to Cuddy.
HCL AppScan 360º offers a comprehensive solution in your data center
HCL AppScan 360º offers the same unifying functionalities, engine, and utilities as AppScan on Cloud, but now available in the customer’s own data center.
Since data privacy regulations like GDPR and CCPA came into force, many have included some kind of geographic boundary requirement.
“The data for the citizens in those countries cannot leave those countries’ borders. So if you’re doing a SaaS solution that gets really interesting if you don’t have a data center within those borders. And so that was the problem,” Cuddy said.
The system is containerized with Docker for easy deployment, ensuring that updates can be obtained seamlessly alongside the company’s regular releases. This approach mirrors the ease of use of the public cloud service, simplifying setup and execution for users.
Currently, the system has been launched for static testing, with plans to expand its capabilities to include dynamic and interactive elements and SCA (Software Composition Analysis) over the coming months. This expansion will provide users with even greater flexibility and the ability to import various features as needed, Cuddy added.