Talan.tech

Claude

by Anthropic · San Francisco, CA

Claude is a safety-focused AI assistant developed by Anthropic.

Risk Score: 21/100 (Moderate) · 12+ incidents · Legal 31 · Safety 25 · Privacy 18 · Regulatory 20 · Security 0

Risk Score

21/100
Moderate Risk

Apr 16, 2026

Risk Score Breakdown

Legal Risk

Court cases & lawsuits

31/100

Safety Risk

Incidents & harm events

25/100

Privacy Risk

Breaches & GDPR actions

18/100

Regulatory Risk

FTC, EU enforcement

20/100

Security Risk

CVEs & vulnerabilities

0/100

Incident Timeline

12 total incidents · showing 5 most recent

April 2026

5 incidents

Apr 2026

LOW · Data Breach · ACTIVE

The Hacker News: OpenAI Launches GPT-5.4-Cyber with Expanded Access for Security Teams

OpenAI on Tuesday unveiled GPT-5.4-Cyber, a variant of its latest flagship model, GPT‑5.4, that's specifically optimized for defensive cybersecurity use cases, days after rival Anthropic unveiled its own frontier model, Mythos. "The progressive use of AI accelerate…

#hackernews #security #breach

Apr 2026

HIGH · Data Breach · ACTIVE

The Hacker News: Your MTTD Looks Great. Your Post-Alert Gap Doesn't

Anthropic restricted its Mythos Preview model last week after it autonomously found and exploited zero-day vulnerabilities in every major operating system and browser. Palo Alto Networks' Wendi Whitmore warned that similar capabilities are weeks or months from prolifera…

#hackernews #security #breach

Apr 2026

HIGH · Data Breach · ACTIVE

The Hacker News: Anthropic's Claude Mythos Finds Thousands of Zero-Day Flaws Across Major Systems

Artificial Intelligence (AI) company Anthropic announced a new cybersecurity initiative called Project Glasswing that will use a preview version of its new frontier model, Claude Mythos, to find and address security vulnerabilities. The model will be…

#hackernews #security #breach

Apr 2026

HIGH · Court Case · ACTIVE · 3:26-mc-80104

Court Case: In re Subpoena to Anthropic, PBC

District Court, N.D. California | #3:26-mc-80104 | Parties: X.AI LLC v. OPENAI, L.L.C. v. Anthropic PBC v. OpenAI OpCo, LLC | Cause: Civil Miscellaneous Case | Nature: 890 Other Statutory Actions | Judge: Peter H. Kang

Court: District Court, N.D. California

#courtlistener #lawsuit #court-case

Apr 2026

LOW · Safety Incident · ACTIVE

AI Incident Database: Are China’s ‘AI tigers’ cheating? US rival Anthropic alleges some are

United States artificial intelligence firm Anthropic is accusing three prominent Chinese AI labs of illegally extracting capabilities from its Claude model to advance their own, claiming it raises national security concerns. The Chinese un ... (https://incidentdatabase.ai/cite/13

#aiid #ai-incident #safety

Frequently Asked Questions

What is Claude's AI risk score?

Claude has an AI Risk Score of 21/100 (Moderate Risk). This score is calculated from 12+ documented public incidents across legal, safety, privacy, regulatory, and security categories.

Is Claude safe to use?

Claude by Anthropic has a moderate risk profile based on public data. Organizations should review the full incident list and conduct their own due diligence. This score does not constitute legal advice.

Does Claude have lawsuits?

Yes — our public records show 1 court case for Claude: In re Subpoena to Anthropic, PBC.

How is the AI Risk Score calculated?

Scores are weighted across 5 categories: Legal (25%), Safety (25%), Privacy (20%), Regulatory (15%), Security (15%). Each incident is scored by severity and type, then decayed based on age. Active lawsuits and fatal incidents do not decay.
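The methodology above can be sketched in code. This is a minimal illustration, not the site's actual implementation: the per-incident severity scale, the exponential decay with a one-year half-life, and the per-category cap at 100 are all assumptions made here for concreteness; only the category weights and the no-decay rule for active lawsuits and fatal incidents come from the description above.

```python
from dataclasses import dataclass

# Category weights from the methodology described above.
WEIGHTS = {"legal": 0.25, "safety": 0.25, "privacy": 0.20,
           "regulatory": 0.15, "security": 0.15}

@dataclass
class Incident:
    category: str        # one of the keys in WEIGHTS
    severity: float      # base severity, 0-100 (assumed scale)
    age_days: float
    no_decay: bool = False  # active lawsuits / fatal incidents never decay

def decayed_severity(inc: Incident, half_life_days: float = 365.0) -> float:
    """Exponential decay by age; the one-year half-life is hypothetical."""
    if inc.no_decay:
        return inc.severity
    return inc.severity * 0.5 ** (inc.age_days / half_life_days)

def category_score(incidents: list[Incident], category: str) -> float:
    """Sum decayed severities within a category, capped at 100 (assumed cap)."""
    total = sum(decayed_severity(i) for i in incidents if i.category == category)
    return min(total, 100.0)

def risk_score(incidents: list[Incident]) -> int:
    """Weighted sum across the five categories, rounded to a 0-100 score."""
    return round(sum(w * category_score(incidents, c)
                     for c, w in WEIGHTS.items()))
```

For example, a single active lawsuit of severity 40 contributes 0.25 × 40 = 10 points regardless of age, while a safety incident of severity 80 that is one year old contributes 0.25 × 40 = 10 points under the assumed half-life.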
