AI in a Minefield: Learning from Poisoned Data
Data poisoning is one of the main threats to AI systems. When malicious actors have even limited control over the data used to train a model, they can try to make the training process fail, prevent it from converging, skew the model, or install so-called ML backdoors – areas where the model makes incorrect decisions, usually […]
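A minimal sketch of the simplest variant of such an attack, label flipping, may help make the threat concrete. This example is not taken from the article: the dataset, model, and 10% poisoning rate are all illustrative assumptions. It shows how an attacker who controls only a slice of the training labels can measurably skew a model's behavior on clean test data.

```python
# Illustrative sketch (assumed, not from the article): label-flipping poisoning.
# An attacker who controls a small fraction of the training set flips those
# labels; the resulting model is skewed relative to one trained on clean data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary-classification task standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def train_and_score(labels):
    """Fit a simple classifier and report its accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on untouched labels.
clean_acc = train_and_score(y_train)

# Poisoned: the attacker flips the labels of 10% of the training points.
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]
poisoned_acc = train_and_score(poisoned)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

Real-world poisoning and backdoor attacks are more subtle than this sketch (for example, triggers embedded in otherwise correctly labeled inputs), but the mechanism is the same: a small amount of attacker-controlled training data changes what the model learns.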