It doesn't take much to poison a large language model. Researchers found that poisoning can occur when as little as 0.001% of an LLM's training data is fake, ...