Talan.tech

AI Risks in Engineering & Software

Copyright suits, license-contamination cases, supply-chain incidents, and security advisories — scored from public records.


Industry overview

AI coding assistants are deployed in nearly every commercial engineering organization, and the tail of failure modes is long: training-data copyright suits, GPL contamination flowing into proprietary codebases, prompt injection through agent tools, suggested code that imports a malicious package the model hallucinated. The risk is not that the tools fail to work; they often work. The risk is that the failure modes are subtle, that legal exposure shifts onto the deploying organization, and that code-review processes calibrated for human authors miss patterns common in model output.

Key risks for Engineering

Training-data copyright and license contamination

Suits filed against major coding assistants over training on copyleft and proprietary code remain unresolved. The downstream concern: code suggestions that closely match GPL or AGPL sources, ingested into proprietary codebases without license review.

Hallucinated dependencies and supply-chain risk

Coding assistants regularly suggest package names that do not exist or do not match the API the prompt described. Adversaries have begun registering "slop-squatted" packages — malicious packages with names commonly hallucinated by popular models.
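A dependency-resolution policy can close off both failure modes. The sketch below is a minimal, hypothetical gate (the allowlist, threshold, and function names are illustrative, not any real tool's API): a model-suggested package must either be on an internal allowlist or have existed on the public registry long enough that a freshly slop-squatted name fails the check.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative internal allowlist and age threshold -- assumptions, not a
# recommendation of specific values.
ALLOWLIST = {"requests", "numpy", "flask"}
MIN_REGISTRY_AGE = timedelta(days=90)

def package_allowed(name: str, first_published: Optional[datetime]) -> bool:
    """Gate a model-suggested dependency before installation.

    `first_published` is the package's first-release timestamp from the
    registry, or None if the name does not exist there at all.
    """
    if name in ALLOWLIST:
        return True
    if first_published is None:
        # The name resolves to nothing: a hallucinated import.
        return False
    # Reject names registered too recently to be trusted (slop-squatting).
    return datetime.now(timezone.utc) - first_published >= MIN_REGISTRY_AGE
```

In practice the `first_published` lookup would query the registry's metadata API at resolution time; the point is that the check runs before `pip install`, not after.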

Prompt injection through tool calls and agentic workflows

Agents that read attacker-controlled content (issue trackers, web pages, file uploads) and then take actions can be steered to exfiltrate secrets, execute privileged commands, or modify production state. The exploit primitive is becoming better understood; the deployments are not.
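One concrete mitigation is to gate every agent tool call through a permission table and write each decision to an audit log, so a prompt-injected instruction cannot reach a privileged tool the session was never granted. This is a hypothetical sketch, not any real agent framework's API; the tool names and scopes are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

# Hypothetical permission table: each tool maps to the scopes it requires.
TOOL_SCOPES = {
    "read_issue": set(),         # reads attacker-controlled text; unprivileged
    "run_shell": {"exec"},       # executes commands; privileged
    "read_secret": {"secrets"},  # touches credentials; privileged
}

def call_tool(tool: str, granted_scopes: set, **kwargs):
    """Deny-by-default gate: check scopes and audit-log every decision."""
    required = TOOL_SCOPES.get(tool)
    if required is None:
        audit.warning("denied unknown tool %s args=%r", tool, kwargs)
        raise PermissionError(f"unknown tool: {tool}")
    if not required <= granted_scopes:
        audit.warning("denied %s (needs %s) args=%r", tool, required, kwargs)
        raise PermissionError(f"{tool} requires scopes {required}")
    audit.info("allowed %s args=%r", tool, kwargs)
    return ...  # dispatch to the real tool implementation here
```

The design choice that matters is deny-by-default: a session that only reads an issue tracker never holds the `exec` or `secrets` scope, so injected instructions to run commands fail at the gate and leave a log entry.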

Insecure code patterns at scale

Models reproduce the patterns in their training data, including insecure ones — credentials in source, missing input validation, deprecated cryptographic primitives. The volume of generated code makes review the bottleneck.
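Because review volume is the bottleneck, automated pre-merge checks for the most common reproduced patterns help triage. The sketch below flags two of the patterns named above with regular expressions; it is illustrative only, and a real deployment would use a proper SAST tool with far broader coverage.

```python
import re

# Illustrative patterns for two failure modes models commonly reproduce:
# hardcoded credentials and deprecated cryptographic primitives.
INSECURE_PATTERNS = [
    (re.compile(r"(?i)(password|api_key|secret)\s*=\s*['\"][^'\"]+['\"]"),
     "possible hardcoded credential"),
    (re.compile(r"\bhashlib\.(md5|sha1)\b"),
     "deprecated cryptographic primitive"),
]

def scan(source: str) -> list:
    """Return (line_number, warning) pairs for lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in INSECURE_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings
```

Run over a diff rather than the whole tree, a check like this turns "review everything the model wrote" into "review what the scanner flagged plus a sample of the rest."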

Regulatory surface

The Copyright Act, software license terms (GPL/AGPL/MIT/etc.), the CFAA where agents take privileged actions, FTC unfairness authority over insecure-by-default products, NIS2 in the EU, and CISA secure-by-design expectations.

AI services tagged for Engineering

18 services

Buyer checklist

1. Training-data and output-license terms in the contract, including indemnification scope and a license-contamination warranty.

2. Tooling that flags suggestions matching known copyleft sources before they merge.

3. Dependency-resolution policy that catches hallucinated and freshly registered packages.

4. Code-review calibration for AI-authored output: assume failure modes different from those of human-authored code.

5. Agent permissioning and audit logging for any tool that touches production or has access to secrets.

Frequently asked

Can I be sued for using an AI coding assistant?

There is no live theory of liability against the user for the act of using a coding assistant. There are live theories against organizations whose products incorporate AI-generated code that infringes copyright, contaminates a proprietary codebase with copyleft, or ships exploitable patterns. Vendor indemnification scope matters.

Are AI coding assistants safe for regulated industries?

They can be, with controls. The questions to answer in writing: where does the prompt go, what happens to it, what guarantees apply to the output, and how does code review change to catch the failure modes specific to model authorship?

Get alerts when Engineering risk scores change.

Court cases, breaches, and regulatory actions — pushed to you when they affect this industry.