What is the latest update in the AI safety market?
Download our beautiful pitch about the AI safety market

In our AI safety market deck, you will find everything you need to understand the market
The AI safety market is growing fast as companies race to control AI risks.
Regulation deadlines and rising incidents are pushing budgets toward safety tools.
We constantly update this blog post; the latest refresh covers Q4 2025.
And if you want to better understand this new industry, you can download our pitch covering the AI safety market.
Insights
- The AI trust, risk, and security management market hit $2.34 billion in 2024 and is expanding at 21.6% annually, creating a strong foundation for broader AI safety spending through 2030.
- Red teaming services alone generated $1.36 billion in 2024, growing 29% year-over-year, which shows companies are willing to pay for hands-on safety testing beyond governance platforms.
- The EU AI Act takes full effect in August 2026, concentrating compliance budgets in a tight 18-month window and accelerating demand for audit-ready governance systems across Europe.
- Organizations now report an average of 223 GenAI data policy violations per month, demonstrating that runtime monitoring and incident response are becoming operational necessities rather than optional add-ons.
- AI governance market estimates range from $176 million to $890 million for 2024, revealing that analysts still disagree on definitions, which creates opportunity for vendors who can clearly articulate their value proposition.
- North America will represent 40% of AI safety spending in 2026, but Asia-Pacific is projected to grow faster and capture 36% of the market by 2036 as local regulations mature.
- Safety monitoring and incident response will grow from 25% of the AI safety market in 2026 to 35% by 2036, overtaking governance platforms as the largest spending category.
- If AI safety captures just 2% of the projected $300 billion in AI-centric spending by 2026, the market would reach $6 billion, validating our $5.5 billion estimate as conservative.
- The AI Incident Database logged its 1,000th incident in early 2025, and the accelerating pace of documented failures is driving enterprise demand for continuous safety operations beyond one-time assessments.
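The top-down check in the bullet above is simple arithmetic. A minimal sketch, assuming the 2% penetration share stated in that bullet (an assumption, not an observed figure):

```python
# Top-down sanity check: if AI safety captures a small share of
# projected AI-centric spending, what market size does that imply?
# The 2% share is an assumed penetration rate, not measured data.

AI_SPENDING_2026 = 300.0   # $B, projected AI-centric spending by 2026
SAFETY_SHARE = 0.02        # assumed share captured by AI safety

implied_market = AI_SPENDING_2026 * SAFETY_SHARE
print(f"Implied AI safety market: ${implied_market:.0f}B")  # $6B

# A $5.5B bottom-up estimate sits just below this implied ceiling,
# which is why we describe it as conservative.
assert 5.5 <= implied_market
```

The point of the check is directional: even a very small slice of overall AI spending supports a multi-billion-dollar safety market.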

In our AI safety market deck, we have collected signals proving this market is hot right now
Summary table of the most important updates in the AI safety market
You can also get all the latest market news for the month here.
| Piece of news | Category | Exact date | Source |
|---|---|---|---|
| EU proposes Digital Omnibus that could reshape AI Act timelines and data rules | Regulations & Policies | 2025-11-19 | European Commission |
| Australia creates an AI Safety Institute to evaluate frontier capabilities and share risk info | Regulations & Policies | 2025-11-25 | Department of Industry |
| Global standards bodies issue the Seoul Statement on trustworthy AI standards | Regulations & Policies | 2025-12-02 | ITU |
| White House issues EO to push single national AI policy and curb state-level AI rules | Regulations & Policies | 2025-12-11 | The White House |
| Red Hat buys guardrails specialist Chatterbox Labs to add security for AI into its stack | M&A | 2025-12-16 | Red Hat |
| OpenAI reveals RL-powered automated red team to harden Atlas against prompt injection | Breakthrough | 2025-12-22 | OpenAI |
| California SB 53 frontier-model transparency and whistleblower rules officially take effect | Regulations & Policies | 2026-01-01 | California Legislative Info |
| LMArena raises $150M to scale independent AI evaluation as trust layer for models | Fundraisings | 2026-01-06 | PR Newswire |
| Radware discloses ZombieAgent zero-click indirect prompt injection affecting AI agent workflows | Breakthrough | 2026-01-08 | Radware |
| OpenAI publishes updated Raising Concerns policy for whistleblowing and non-retaliation | Regulations & Policies | 2026-01-12 | OpenAI |
| F5 ships AI Guardrails plus AI Red Team for continuous testing and runtime policy enforcement | Product launches | 2026-01-14 | F5 |
| International AI Safety Report 2026 lands as shared evidence base for AI risk debates | Market Research | 2026-02-03 | IAISR |
How is the AI safety market doing now?
How do we define the AI safety market?
We define the AI safety market as products and services that reduce harms and failure modes from AI systems by measuring, mitigating, and governing model behavior and AI-specific risk.
We include AI evaluation and red teaming, safety monitoring and incident response, safety/guardrail layers, and AI trust–risk–security management capabilities such as governance, robustness, privacy/security controls, and explainability when used for risk reduction.
We exclude generic MLOps, general cybersecurity, and general compliance or content moderation offerings unless they are specifically designed to address AI-model behavior or AI-specific threats.
This is also the definition we use in our report covering the AI safety market.
How big is the AI safety market in 2026?
We estimate the AI safety market will reach $5.5 billion globally in 2026.
This is not a random guess; if you want to know how we arrived at this estimate, you can read our AI safety market size analysis here.
The AI safety market in 2026 will be roughly the size of the global video conferencing market at $6 billion, which grew rapidly under similar adoption curves and regulatory pressure.
The AI safety market in 2026 is still in its early stages, with standards like NIST AI RMF and ISO/IEC 42001 only recently adopted, and it is highly competitive, with governance platforms, evaluation tools, guardrails, and monitoring systems all competing aggressively.
How fast will the AI safety market grow in the future?
We estimate the AI safety market will grow at approximately 25% per year from 2026 through 2030.
The AI safety market should reach approximately $13.4 billion by 2030, and looking ten years ahead to 2036, the AI safety market should reach approximately $51.2 billion.
The AI safety market grows faster than general IT spending but slower than pure governance tools: our 25% estimate sits between aggressive governance-only forecasts (35% to 45%) and slower broad risk-management growth rates (12% to 13%).
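The projections above follow from simple compound growth. A minimal sketch, assuming a constant 25% CAGR from the $5.5 billion 2026 base (our stated estimate, not a guaranteed trajectory):

```python
# Project the AI safety market size under a constant growth rate.
# Base figure and CAGR come from the estimates above; constant
# compounding is a simplification, not a forecast guarantee.

def project(base: float, cagr: float, years: int) -> float:
    """Compound a base market size forward by `years` at `cagr`."""
    return base * (1 + cagr) ** years

BASE_2026 = 5.5   # $B, estimated 2026 market size
CAGR = 0.25       # 25% annual growth, 2026-2030 estimate

size_2030 = project(BASE_2026, CAGR, 4)   # 2026 -> 2030
size_2036 = project(BASE_2026, CAGR, 10)  # 2026 -> 2036

print(f"2030: ${size_2030:.1f}B")  # ~$13.4B
print(f"2036: ${size_2036:.1f}B")  # ~$51.2B
```

Note that extending the same 25% rate all the way to 2036 is a strong assumption; most markets decelerate as they mature, so the 2036 figure is best read as an upper-path scenario.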

In our AI safety market deck, we answer all the common questions from investors and entrepreneurs
What does current funding activity look like in the AI safety market?
Our team continually updates our AI safety market pitch deck, keeping a close eye on the market and tracking key signals.
One of those signals is fundraising activity across startups. Each month, we refresh this page with a list of AI safety startups that have raised funding, and we also publish a quarterly analysis here.
Is funding momentum accelerating or cooling in the AI safety market these days?
Q4 2025 saw only 2 funding deals in the AI safety market totaling $159 million, fewer deals than Q3 2025, but the average deal size increased significantly because LMArena's $150 million round drove the total up.
Compared to Q4 2024, the AI safety market saw fewer deals in Q4 2025 but similar total funding amounts, showing that investors are writing bigger checks to fewer companies as the market matures.
The average deal size in Q4 2025 jumped to $79.5 million, well above previous quarters, driven entirely by LMArena's massive round, while Portal26's smaller $9 million raise shows that early-stage deals are still happening but at much lower volumes.
Which categories and business models are attracting capital in the AI safety market?
These categories and business models of the AI safety market are receiving important fundraising currently:
- AI evaluation platforms like LMArena raised $150 million to build trusted AI evaluation infrastructure, showing that measuring model behavior is now investable as a standalone business, with one massive deal dominating Q4 2025.
- GenAI governance platforms like Portal26 raised $9 million for adoption management and policy controls, demonstrating that governance-in-practice tools still attract early-stage capital even in a slower funding environment.
The funding in Q4 2025 shows the AI safety market is splitting into two tiers, with massive platform rounds like LMArena's $150 million on one side and smaller governance tools still raising single-digit-million Series A rounds on the other.
Who's writing the most checks in the AI safety market?
These investors are being very active when it comes to fundraising in the AI safety market:
- The investors in LMArena's $150 million round are not publicly disclosed yet, but evaluation infrastructure is clearly attracting large venture capital and growth equity investors who see it as critical AI market infrastructure.
- The investors in Portal26's $9 million Series A are not publicly disclosed, but governance platforms continue to attract early-stage venture capital focused on enterprise software and compliance tools in the AI safety market.
The AI safety market saw concentrated capital deployment in Q4 2025 with one mega-round dominating activity, suggesting that top-tier investors are picking winners in evaluation infrastructure while early-stage investors still support governance tools.
Any big acquisitions or IPOs in the last three months in the AI safety market?
These are the big acquisitions and IPOs that happened recently in the AI safety market:
- Red Hat acquired Chatterbox Labs on December 16, 2025 to add model-agnostic AI safety and GenAI guardrails into its enterprise platform, validating that guardrails are becoming a core platform feature.
The Red Hat acquisition of Chatterbox Labs shows that large enterprise software vendors are buying AI safety capabilities rather than building them, which creates exit opportunities for startups but also increases competitive pressure from bundled solutions.

In our AI safety market deck, we show you long-term trends so you can make better decisions
How are companies in the AI safety market performing overall?
We are watching this market every day, because we need to constantly update our pitch deck. Here are a couple of things we have noticed.
Are there any standout success metrics or financial results in the AI safety market?
Unfortunately, there haven't been any standout success metrics or impressive financial results during Q4 2025 in the AI safety market.
Have there been any major partnerships in the AI safety market?
According to our data, there hasn't been any major partnership worth mentioning during Q4 2025 in the AI safety market.
Have there been any notable technology or infrastructure breakthroughs in the AI safety market?
These are the notable technology or infrastructure breakthroughs that happened recently in the AI safety market:
- On December 22, 2025, OpenAI revealed an RL-powered automated red teaming system that continuously discovers prompt injection attacks and patches agents in a rapid loop, raising the bar for what serious agent security looks like.
- Radware disclosed the ZombieAgent vulnerability on January 8, 2026 showing how attackers can hijack agents quietly using indirect prompt injection, demonstrating that agent security is now an enterprise risk involving data and actions.
- F5 launched AI Guardrails and AI Red Team on January 14, 2026 for runtime protection and automated adversarial testing, showing that major application security vendors are treating AI safety as a first-class product category.
The breakthroughs in Q4 2025 show the AI safety market is moving from manual testing to automated continuous security operations, with both startups and incumbents building systems that discover and patch vulnerabilities at machine speed.
Have any companies restructured or shifted pricing or business model in the AI safety market?
No, there hasn't been any update in this section during Q4 2025 in the AI safety market.
Are there any other notable wins or successes in the AI safety market?
No, there hasn't been any impressive win worth talking about during Q4 2025 in the AI safety market.

In our AI safety market deck, we will give you useful market maps and grids
What is the overall sentiment in the AI safety market right now?
Are there any notable recent opinion pieces, thought leadership about the AI safety market?
No, we don't think there have been any notable opinion pieces worth mentioning during Q4 2025 in the AI safety market.
Are there any interesting and recent market research reports about the AI safety market?
These are the interesting market research reports that came out recently in the AI safety market:
- The International AI Safety Report 2026 was published on February 3, 2026 as a consolidated view of risks, methods, and the current state of safety knowledge, which often becomes the backbone of policy and procurement expectations.
The International AI Safety Report can standardize what policymakers and enterprises consider reasonable safety practice, which tends to accelerate markets once expectations crystallize and budgets follow.
Have there been any regulatory changes, policy updates, or new compliance requirements in the AI safety market?
These are the regulatory changes, policy updates, and new compliance requirements that happened recently in the AI safety market:
- The European Commission proposed the Digital Omnibus package on November 19, 2025; if adopted, it could change how fast and how strictly parts of AI regulation bite.
- Australia announced the creation of an AI Safety Institute on November 25, 2025 to assess emerging AI capabilities and support timely actions to address risks.
- ISO, IEC and ITU published the Seoul Statement on December 2, 2025 outlining shared commitments for AI standardization that will likely become the checklist behind procurement rules.
- The White House issued an executive order on December 11, 2025 framing a national AI policy approach and directing actions to reduce state-law obstruction.
- California SB 53 frontier transparency and whistleblower rules took effect on January 1, 2026 pushing large developers toward standardized disclosures and protections for staff raising safety concerns.
- OpenAI published an updated Raising Concerns policy on January 12, 2026 describing how concerns can be raised and how retaliation is handled as part of the governance layer.
The regulatory activity in Q4 2025 shows the AI safety market is entering a phase where governance expectations are becoming measurable controls with whistleblowing, reporting channels, and internal escalation treated as standard safety infrastructure across multiple jurisdictions.

In our AI safety market deck, we help you understand how the market is structured
Related blog posts
- What is the latest news in the AI safety market?
- What is the latest funding news in the AI safety market?
- What is the real market size of the AI safety market?
Who is the author of this content?
NEW MARKET PITCH TEAM
We track new markets so founders and investors can move faster. We build living “market pitch” documents for emerging markets: from AI to synthetic biology and new proteins. Instead of digging through outdated PDFs, random blog posts, and hallucinated LLM answers, our clients get a clean, visual, always-updated view of what’s really happening. We map the key players, deals, regulations, metrics and signals that matter so you can decide faster whether a market is worth your time. Want to know more? Check out our about page.
How we created this content 🔎📝
At New Market Pitch, we kept seeing the same problem: when you look at a new market, the data is either missing, paywalled, or buried in 300-page reports that feel like they were written in the 80s. On the other side, LLMs and random blog posts give you confident answers with no sources, and sometimes they just make things up. That’s not good enough when you’re about to invest real money or launch a company.
So we decided to fix the experience. For each market we cover, we build a structured database and update it on a regular basis. We track funding rounds, fund memos, M&A moves, partnerships, new products, policy changes, and the real activity of startups and incumbents. Then we turn all of that into a clear “market pitch” that shows where the opportunities are and how people actually win in that space.
Every key data point is checked, sourced, and put back into context by our team. That’s how we can give you both speed and reliability: fast coverage of new markets, without the usual guesswork.