Elon Musk told a federal courtroom in California this week that his artificial intelligence startup, xAI, has used OpenAI's models to improve its own systems. The remarks came during testimony in a case drawing significant attention to how AI companies build and train their models, particularly the practice known as model distillation.
What Is Model Distillation?
Model distillation is a technique in which one AI model is used to train another, typically by having a smaller "student" model learn to reproduce the outputs of a larger "teacher" model. While the technique is widely used across the tech industry, it has also raised concerns about whether companies are copying or benefiting from rivals' technology without explicit permission. Musk's comments have added to the ongoing debate over where the boundaries of using others' AI systems lie.
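For readers unfamiliar with the mechanics, the idea can be illustrated with a toy sketch. Here, a hypothetical "teacher" function stands in for a large trained model, and a small "student" model is fitted to the teacher's soft (probability) outputs rather than to any ground-truth labels; everything below is an illustrative assumption, not code from any company involved in the case.

```python
import math
import random

# Hypothetical "teacher": stands in for a large trained model.
# It maps a scalar input to a probability (a "soft label").
def teacher(x):
    return 1 / (1 + math.exp(-(3.0 * x - 1.0)))

# Student: a tiny logistic model trained to imitate the teacher's
# soft outputs -- the essence of distillation.
def train_student(inputs, epochs=2000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in inputs:
            p_teacher = teacher(x)                      # soft target
            p_student = 1 / (1 + math.exp(-(w * x + b)))
            # Gradient of the cross-entropy between teacher and student
            err = p_student - p_teacher
            w -= lr * err * x
            b -= lr * err
    return w, b

random.seed(0)
xs = [random.uniform(-2, 2) for _ in range(100)]
w, b = train_student(xs)
# The student's parameters approach the teacher's (3.0, -1.0),
# even though the student never saw real labels.
```

The point of contention in the industry is not this mechanism itself, which is standard, but whose model plays the teacher and whether its terms of service permit that use.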
Elon Musk's Courtroom Statements
During questioning, Musk explained that model distillation involves using one AI model to train another. When asked directly whether xAI had used OpenAI's technology in this manner, he appeared to avoid a definitive answer, stating that “generally all the AI companies” engage in such practices. When pressed further, Musk responded, “Partly.” He added, “It is standard practice to use other AIs to validate your AI.”
Growing Debate Around AI Training Practices
Model distillation has become more prevalent in recent years, but it has also sparked intense debate across the AI industry. The primary concern is whether the practice crosses legal or ethical boundaries, especially when companies train on outputs from rivals' systems. Firms like OpenAI and Anthropic have accused some companies, including Chinese AI labs, of using distillation to copy their models. OpenAI has raised concerns about DeepSeek, while Anthropic has named DeepSeek, Moonshot AI, and MiniMax.
Meanwhile, Google has taken steps to block what it calls “distillation attacks,” describing them as “a method of intellectual property theft that violates Google’s terms of service.” In a blog post, Anthropic stated, “Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
The case continues to unfold as the AI industry grapples with the implications of model distillation and the boundaries of fair use in artificial intelligence development.