How to protect repo against unwanted deletion? #186117
Replies: 2 comments
-
Great question, this is a real risk when using LLMs with broad automation rights. A practical, defense-in-depth approach usually works best:
1. Protect critical branches
2. Least-privilege access for automation
3. Mandatory human-in-the-loop
4. Backups and recovery
5. Guardrails in automation
In short: protect branches, restrict permissions, require human confirmation for destructive operations, and keep backups. LLMs are powerful, but they should never be trusted with irreversible actions.
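As a concrete illustration of point 2 (least-privilege access for automation), a small pre-flight check can verify that the agent’s token has no admin rights on the repository, since only admin-level access can delete a repo or change its settings. A minimal sketch using the `gh` CLI; `OWNER/REPO` and the `AUTOMATION_TOKEN` variable are placeholders for your own setup:

```bash
#!/usr/bin/env bash
# Pre-flight check: refuse to start the agent if its token has admin rights
# on the repository (admin access is what allows deletion and settings changes).
set -euo pipefail

REPO="OWNER/REPO"   # placeholder

# `gh` authenticates with GH_TOKEN; the repository API reports the effective
# permissions of that identity.
admin=$(GH_TOKEN="$AUTOMATION_TOKEN" gh api "repos/$REPO" --jq '.permissions.admin')

if [ "$admin" = "true" ]; then
  echo "Refusing to run: automation token has admin rights on $REPO" >&2
  exit 1
fi
echo "OK: automation token cannot delete $REPO or change its settings"
```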
-
An LLM should suggest, prepare, and explain changes, but never execute destructive actions autonomously. TL;DR: you don’t rely on intelligence, you rely on constraints.
1. Strict Least-Privilege Access (Non-Negotiable)
Never give an LLM credentials that can delete repositories, force-push, rewrite history, or change repository settings.
Do instead: give the agent a narrowly scoped, short-lived token with read access and, at most, push access to non-protected working branches.
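One low-tech way to enforce this at the working-copy level, independent of token scopes, is to hand the agent a clone that cannot push at all; the repo URL and directory name below are placeholders:

```bash
# Give the agent a clone it can read and commit to locally, but whose push
# URL points nowhere, so any `git push` it attempts simply fails.
git clone https://github.com/OWNER/REPO.git agent-workspace   # placeholder URL
cd agent-workspace

# Overwrite only the *push* URL; fetch/pull keep working normally.
git remote set-url --push origin DISABLED

git push origin main   # fails: 'DISABLED' does not appear to be a git repository
```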
2. Hard Branch & Repo Protections
This protects you even from humans. Mandatory protections: require pull requests with at least one review before merging, block force pushes and branch deletion on main and release branches, and enforce the rules for administrators as well.
An LLM operating through Git physically cannot bypass these.
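These protections can be enabled in the repository settings UI or scripted; the sketch below uses the branch protection REST endpoint through the `gh` CLI, with `OWNER/REPO` as a placeholder, and is meant to be run once by a human admin rather than by the agent:

```bash
# Enable branch protection for `main` via the REST API.
gh api -X PUT "repos/OWNER/REPO/branches/main/protection" --input - <<'EOF'
{
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "enforce_admins": true,
  "required_status_checks": null,
  "restrictions": null,
  "allow_force_pushes": false,
  "allow_deletions": false
}
EOF
```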
3. Human-in-the-Loop for Destructive or Global Actions
Define a clear line: read-only and proposal operations may run automatically; anything destructive or global (deleting branches, rewriting history, changing settings, touching production) always requires explicit human approval.
Pattern to use: the LLM proposes, a human approves, and only then does the system execute.
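A minimal version of that pattern is a small approval gate the agent has to pass through; the script name, the `proposed.sh` file, and the prompt wording below are just illustrative:

```bash
#!/usr/bin/env bash
# approve.sh: a minimal propose/approve/execute gate (illustrative only).
# The agent may only write the commands it wants into proposed.sh;
# nothing runs until a human has read them and typed "yes".
set -euo pipefail

PLAN="proposed.sh"   # hypothetical file the agent is allowed to write

echo "=== Proposed commands ==="
cat "$PLAN"
echo "=========================="

read -r -p "Execute these commands? (yes/no) " answer
if [ "$answer" = "yes" ]; then
  bash "$PLAN"
else
  echo "Aborted; nothing was executed."
fi
```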
4. Sandbox Everything (Ephemeral Environments)
LLMs should work in throwaway clones, scratch branches, and disposable environments, never directly against the canonical repository or production systems.
If something goes wrong, you delete the sandbox and start over; nothing of lasting value is lost.
This is exactly how cloud providers test automation safely.
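A sketch of the ephemeral-workspace idea, assuming a plain `git` setup; the repository URL and branch name are placeholders:

```bash
#!/usr/bin/env bash
# Run the agent against an ephemeral clone in a temporary directory.
# The cleanup trap throws the whole sandbox away when the script exits.
set -euo pipefail

SANDBOX=$(mktemp -d)
trap 'rm -rf "$SANDBOX"' EXIT

git clone --depth 1 https://github.com/OWNER/REPO.git "$SANDBOX/repo"   # placeholder URL
cd "$SANDBOX/repo"
git switch -c agent/experiment-scratch   # scratch branch, never main

# ... let the agent edit, build, and test here ...
```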
5. “Dry Run First” Enforcement
Force the LLM to show what will change before anything is applied: a human-readable diff or plan, never a blind execution.
Examples: git diff, terraform plan, kubectl diff.
If the system can’t show you what will happen, it doesn’t get to run.
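Each of those tools already supports this split between planning and applying; a sketch of the pattern, with placeholder branch, plan, and manifest names:

```bash
# Dry-run-first: the agent may generate a plan, but applying it is a
# separate, human-triggered step.

# Git: preview exactly what a push would do, without pushing anything.
git push --dry-run origin agent/experiment-scratch

# Terraform: the agent writes a plan file; only a human applies it.
terraform plan -out=agent.tfplan
terraform show agent.tfplan      # human reviews the plan
terraform apply agent.tfplan     # human runs this step, not the agent

# Kubernetes: show the server-side diff without changing the cluster.
kubectl diff -f deployment.yaml
```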
6. Immutable Audit Logs (So You’re Never Guessing)
Your concern that “it’s not entirely clear what has been executed” is a red flag. You want an append-only record of everything the agent ran: timestamps, which identity ran it, the exact commands, and their results.
If you can’t reconstruct events, you’re flying blind.
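A minimal sketch of such a record on a single host running the agent; the log path is an example, and the `chattr +a` note applies to Linux filesystems that support it:

```bash
# run_logged: wrap every agent command so it is recorded before it runs.
# On Linux, `sudo chattr +a` can make the log file append-only so even the
# agent's own user cannot rewrite the history.
AUDIT_LOG="/var/log/agent-commands.log"   # example path

run_logged() {
  printf '%s\t%s\t%s\n' "$(date -u +%FT%TZ)" "$USER" "$*" >> "$AUDIT_LOG"
  "$@"
  local status=$?
  printf '%s\texit=%s\n' "$(date -u +%FT%TZ)" "$status" >> "$AUDIT_LOG"
  return "$status"
}

# Every command the agent issues goes through the wrapper:
run_logged git status
run_logged git push --dry-run origin agent/experiment-scratch
```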
7. Separation of Roles (Critical for Teams)
Use different LLM roles, not one omnipotent agent: for example, one agent that only reads and reviews, another that can open pull requests on scratch branches, and none that can merge, administer, or delete.
This mirrors real engineering orgs for a reason.
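As a rough sketch of what role separation can look like at the process level (the `run-agent` launcher and its `--role` flag are hypothetical names, not a real tool):

```bash
# Illustrative only: each agent process is launched with its own, minimally
# scoped token; no agent ever receives an admin or merge-capable credential.

# Reviewer agent: read-only token, used to fetch code and leave review comments.
GH_TOKEN="$REVIEWER_TOKEN" ./run-agent --role reviewer

# Author agent: may push to its own scratch branches and open pull requests.
GH_TOKEN="$AUTHOR_TOKEN" ./run-agent --role author

# Merging, settings changes, and deletions stay human-only: no token with
# those rights is ever exported into an agent's environment.
```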
8. Backups and Repo Immutability
Even with everything above, keep regular automated backups: mirrored clones stored somewhere the agent cannot reach, so the repository can be restored independently of the hosting platform.
If disaster still strikes, recovery is minutes, not weeks.
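A sketch of a scheduled mirror backup, with placeholder paths and URLs; run it from a machine and account the agent has no access to:

```bash
#!/usr/bin/env bash
# Nightly mirror backup. A mirror clone contains every branch, tag, and ref,
# so a deleted repository can be restored by pushing the mirror back.
set -euo pipefail

BACKUP_DIR="/backups/repo.git"   # placeholder path

if [ ! -d "$BACKUP_DIR" ]; then
  git clone --mirror https://github.com/OWNER/REPO.git "$BACKUP_DIR"
else
  git -C "$BACKUP_DIR" remote update --prune
fi

# To restore after a disaster, push the mirror into a fresh, empty repo:
#   git -C /backups/repo.git push --mirror https://github.com/OWNER/REPO-restored.git
```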
-
Original question (topic area: Question)
The use of LLMs in development teams has become indispensable. These powerful AI tools can perform many tasks autonomously, for hours on end, including running shell commands, and unfortunately it is not entirely clear what has been executed. How do you prevent an LLM from accidentally deleting your entire online project repo, or from compromising versions and branches? What is the best approach?