
Just 250 Documents Can Backdoor LLMs of Any Size
New research shows that as few as 250 poisoned documents can implant a backdoor in an LLM, regardless of model size or training-data volume, challenging the prior assumption that attackers must control a fixed percentage of the training set.