Your AppSec Tools Can’t Protect AI – Here’s The Only Real Solution According To Noma Security CEO Niv Braun
By Roberto Popolizio
Published on: April 18, 2025
Everyone is racing to adopt AI, but most teams are securing it like any other software application. And that’s where everything breaks down.
Tools don’t fit. Workflows don’t match. Risks go unchecked.
This happens because traditional security solutions fail to protect the full AI lifecycle, from development to production. They disrupt engineering processes and leave major gaps in visibility and governance.
In this interview with SafetyDetectives, Niv Braun, Co-Founder and CEO of Noma Security, explains why this blind spot is quickly becoming the biggest liability in enterprise security, and how Noma’s platform tackles it by consolidating AI supply chain security, AI security posture management, and AI runtime protection into one system that allows security teams to work in parallel with engineering teams without getting in their way.
Why AppSec Tools Can’t Protect AI
There’s a critical blind spot in securing the AI lifecycle — and it’s one that traditional application security tools weren’t built to cover.
Most security solutions today fail to provide protection across the full AI lifecycle, from development to production. Instead, they disrupt engineering workflows and leave major gaps in visibility and governance.
That’s why we built a platform that consolidates AI supply chain security, AI security posture management, and AI runtime protection into one system. It allows security teams to work in parallel with engineering teams — without getting in their way — and gives organizations the confidence to scale AI while maintaining strong security posture.
A Growing Problem No One Seems To Recognize
We’ve already seen major security incidents in the field. Enterprises are being compromised by attackers who exploit the lack of governance around AI supply chains and the lack of control over models in production.
The AI lifecycle isn’t like traditional software development: it involves different R&D processes, driven by data scientists, ML engineers, and other specialized teams. It also relies on technologies such as data pipelines and MLOps tools, and comes with its own category of technical vulnerabilities that exist at the model level, not just in the surrounding application.
As organizations rush to embed AI into mission-critical systems, this oversight is only getting more dangerous. And yet, most teams still rely on fragmented tools or traditional security stacks that were never meant to handle these types of systems.
The Innovation-Security Dilemma
The companies most affected are those using AI for competitive advantage — especially in sectors like financial services, B2B software, healthcare, and retail. Regulated industries in particular are at even greater risk due to potential compliance violations.
The challenge is twofold:
- First, there’s often a disconnect between security and AI/ML teams. Security professionals aren’t typically familiar with AI workflows, while data scientists tend to prioritize innovation over security.
- Second, traditional security tools simply weren’t designed for the way AI is built and deployed. They don’t account for the specialized processes and vulnerabilities introduced by machine learning models and pipelines.
This leaves organizations in a difficult position: either slow down AI innovation to stay secure, or move forward at full speed and accept undefined exposure in their most critical applications.
A Different (Better) Approach To Securing AI
Many companies are trying to address the problem, but the current approaches are flawed. Some tools only focus on AI runtime security. Others handle AI supply chain risks. The result is a patchwork of disconnected solutions that don’t offer unified visibility and require complex deployments.
This not only makes it harder for security teams to do their job — it also creates friction with engineering teams, slowing everything down.
We knew a different approach was needed — one that covers the full lifecycle and integrates natively with existing AI/ML tools and workflows.
What Needs to Happen Now
The industry is approaching a turning point. As regulatory pressure increases and high-profile breaches emerge, organizations will have to stop treating AI security as an extension of AppSec and start building for it on its own terms.
The path forward requires:
- Visibility into the full AI footprint
- Collaboration between security and AI/ML teams
- Lifecycle-based controls that work across the build and deployment pipeline
That’s why we built Noma.
Because what worked for software doesn’t work for AI.
And the longer organizations wait to adapt, the more they stand to lose.
Connect with Niv Braun and Noma Security
LinkedIn: https://www.linkedin.com/in/niv-braun/
Website: https://noma.security