What is LLM Poisoning?
What is LLM Poisoning? LLM Poisoning ( (Large Language Model) ) is when someone intentionally feeds misleading, false, or harmful data into the training process of an AI model. This “poisons” the model, causing it to generate incorrect, biased, or dangerous responses—like teaching a parrot lies so it repeats them. How Does It Work? Imagine […]