Anthropic has warned that even a few poisoned samples in a dataset can compromise an AI model. A joint study with the UK AI Security Institute found that as few as 250 malicious documents can implant backdoors in LLMs up to 13B parameters, indicating that model size alone offers no protection.
ByteDance’s $2.5B AI Chip Deal Tests US Export Controls
ByteDance is expanding AI capacity in Malaysia through a cloud partner, showing how overseas infrastructure can still provide access to advanced AI chips despite US export controls.