
Artificial intelligence, or AI, is one of the most significant technical advancements of the digital age, yet there’s still room for improvement. Could engineers revolutionize AI technology by using an existing model to train others?
Can AI really train other AI?
Simply put, AI can train other AI, and since development processes are famously tedious, doing so would be helpful. Some experts believe it could be the key to producing artificial general intelligence, a theoretical point where algorithms match or surpass human intelligence across a wide range of tasks.
How does an AI train other AI?
At a surface level, the logistics of AI-to-AI training are relatively simple: new models essentially use pre-trained versions as a blueprint. They learn from a data set a separate algorithm has already interpreted, which speeds up the entire process considerably.
For instance, transfer learning involves reusing an existing model for a specialized task. In a standard approach, engineering teams keep the feature-extracting layers of a pre-trained algorithm and replace only the final layer, tailoring it to the role they want the new model to perform.
Although many AI-to-AI methods are complex, transfer learning is reasonably basic. For example, a model that has already learned to identify low-level shapes and patterns could serve as the starting point for a new one, which people could then tailor to recognize human faces. Since the process is quick, it works well for teams that need to scale AI production without expending significant resources.
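To make the idea concrete, here is a minimal transfer-learning sketch in Python using PyTorch and torchvision. It assumes a pre-trained ResNet-18 backbone and a hypothetical two-class task; the point is simply that the pre-trained layers are frozen and only the final layer is replaced and retrained.

```python
# A minimal transfer-learning sketch (assumed setup: PyTorch + a recent torchvision).
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g., "face" vs. "not face" for the new task (hypothetical)

# Load a model that has already learned low-level shapes and patterns.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every pre-trained layer so its learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer and tailor it to the new role.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the new final layer's parameters remain trainable from here.
trainable = [p for p in model.parameters() if p.requires_grad]
```

From there, only the new final layer would be trained on the specialized data set, which is why the approach scales so cheaply.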
Has anyone successfully trained AI with AI?
There have been multiple successful instances of AI-to-AI training. For example, research scientists at the University of Guelph in Ontario designed and developed a hypernetwork, an advanced AI that takes milliseconds to identify optimal parameters for an untrained deep neural network.
Traditionally, the standard approach has been stochastic gradient descent, in which a model learns from massive data sets over a lengthy training period. Since it’s virtually impossible to follow the internal logic of such a model, engineering teams must minimize the potential for error upfront. Even then, this method requires an existing multi-layered structure of artificial neurons before training can begin.
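For contrast, this is roughly what the conventional approach looks like in code: a bare-bones stochastic gradient descent loop in PyTorch, with toy data and placeholder hyperparameters standing in for the massive data sets and lengthy training runs described above.

```python
# A bare-bones SGD training loop; data, sizes and hyperparameters are placeholders.
import torch
import torch.nn as nn

X = torch.randn(1000, 20)                 # toy features
y = torch.randint(0, 2, (1000,))          # toy binary labels
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                   # lengthy in practice, tiny here
    perm = torch.randperm(len(X))
    for start in range(0, len(X), 64):    # mini-batches of the data set
        idx = perm[start:start + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(X[idx]), y[idx])
        loss.backward()                   # error signal flows backward
        optimizer.step()                  # weights nudged to reduce the error
```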
Since building a deep neural network is incredibly complex and time-consuming, training, validating and testing multiple models often isn’t feasible. That was the case, at least, until the introduction of the hypernetwork.
When the researchers tested its prediction capabilities on 500 random neural network architectures and compared the results to stochastic gradient descent, it performed just as well, if not better. Since engineers can set a new model’s parameters without a trial-and-error optimization process, the hypernetwork theoretically eliminates the need for conventional training.
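The hypernetwork idea can be sketched in a few lines: one network takes a description of a target architecture and emits that network’s weights in a single forward pass, so the target itself never goes through gradient training. The sketch below is a deliberately tiny illustration with invented sizes and a random stand-in for the architecture encoding, not a reproduction of the Guelph system.

```python
# Toy hypernetwork sketch: one model predicts another model's weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

TARGET_IN, TARGET_HIDDEN, TARGET_OUT = 4, 8, 3   # tiny target MLP sizes (invented)
n_w1 = TARGET_HIDDEN * TARGET_IN
n_b1 = TARGET_HIDDEN
n_w2 = TARGET_OUT * TARGET_HIDDEN
n_b2 = TARGET_OUT
N_PARAMS = n_w1 + n_b1 + n_w2 + n_b2             # total weights to predict

class HyperNetwork(nn.Module):
    """Maps an architecture/task embedding to a full set of target weights."""
    def __init__(self, embed_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, N_PARAMS),
        )

    def forward(self, embedding):
        return self.net(embedding)

def run_target(flat_params, x):
    """Run the tiny target MLP using weights produced by the hypernetwork."""
    i = 0
    w1 = flat_params[i:i + n_w1].view(TARGET_HIDDEN, TARGET_IN); i += n_w1
    b1 = flat_params[i:i + n_b1]; i += n_b1
    w2 = flat_params[i:i + n_w2].view(TARGET_OUT, TARGET_HIDDEN); i += n_w2
    b2 = flat_params[i:i + n_b2]
    h = F.relu(F.linear(x, w1, b1))
    return F.linear(h, w2, b2)

# Usage: weights appear in one forward pass; only the hypernetwork itself
# would ever need gradient training.
hyper = HyperNetwork()
arch_embedding = torch.randn(16)          # stand-in for an architecture encoding
params = hyper(arch_embedding)            # predicted weights for the target MLP
logits = run_target(params, torch.randn(5, TARGET_IN))
print(logits.shape)                       # torch.Size([5, 3])
```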
Are there risks to training AI with AI?
AI has been training other AI for longer than people realize, but not in the way most would assume. Businesses often hire freelancers to train their models on complex tasks. These jobs typically don’t pay well and demand incredibly high output, so many workers have turned to generative AI to boost their performance.
Research scientists from the Swiss Federal Institute of Technology studied this phenomenon, finding that up to 46% of the freelancers they hired used AI to complete their assignments. While the practice may not seem noteworthy on its own, it has significant implications.
AI must consume human intelligence and content to progress, so what happens when models get stuck in a feedback loop? Minor mistakes grow over time. Large language models are already error-prone, for instance, so each subsequent round of training on their output could become increasingly flawed.
What are the ethics of training AI with AI?
Training AI with AI carries ethical implications, most of which concern output accuracy. For instance, bias introduced during the training process results in flawed conclusions, which is antithetical to the entire purpose of leveraging AI. As inaccuracies amplify over time, engineers receive fewer impartial, data-driven results.
Researchers from Cornell University identified a concerning issue with generative technology: an amplification of defects that occurs when it continually trains on the output of other algorithms. They refer to the process as “model collapse,” stating that AI will rapidly forget its original training data, leading to data contamination.
For example, an AI could train on diagnostic images of which 90% show skin cancer and 10% show birthmarks. It may initially mistake some birthmarks for signs of cancer since they’re underrepresented in the data set. Other models learning from it would reproduce and compound this minor misinterpretation, eventually losing the ability to identify birthmarks at all.
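This degenerative loop is easy to simulate. The sketch below uses an invented, deliberately simple “model” that only learns class proportions: each generation trains on a finite sample drawn from the previous generation’s learned proportions, and the rare birthmark class tends to drift away and, once gone, can never return.

```python
# Minimal simulation of the model-collapse failure mode described above.
# The toy categorical "model" and all numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
proportions = np.array([0.9, 0.1])   # [skin cancer, birthmark] in the real data
SAMPLE_SIZE = 50                     # finite training set drawn each generation

for generation in range(1, 31):
    # The new model learns only from data sampled from the previous model.
    samples = rng.choice(2, size=SAMPLE_SIZE, p=proportions)
    counts = np.bincount(samples, minlength=2)
    proportions = counts / counts.sum()
    print(f"gen {generation:2d}: birthmark share = {proportions[1]:.3f}")

# The birthmark share drifts from generation to generation; once it hits zero,
# no later generation can ever recover it from the synthetic data alone.
```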
Although the ethical implications of this scenario may seem abstract, the reality is that training AI with AI could ultimately produce misdiagnoses, unethical advice, incorrect information or fake references. And because the practice is so valuable for developing neural networks, it may be challenging to trace inaccuracies back to their source.
Does AI training impact output quality?
Most experts agree that repeatedly training AI with AI will degrade output quality because it distorts the original data set. Without fresh data, models perform poorly, since they can only draw conclusions from misinterpreted or altered information.
Scientists from Rice and Stanford University named this decline in quality “Model Autophagy Disorder,” explaining how repeatedly training new models on synthetic or stale data creates a degenerative feedback loop over time. They warn that, left unchecked, future iterations of AI will only produce low-quality results.
Why train AI with AI?
Although ethical and quality concerns make training AI with AI seem infeasible, it’s still a promising venture. It makes the technology more accessible, speeds up testing and unlocks new technical potential.
Most adverse effects and output issues stem from repetitive, generational training. Nearly anything becomes unrecognizable or unusable after passing through dozens of iterations; the process is similar to evolution in that nothing stays unchanged over time.
Since AI becomes unstable or produces poor-quality work after learning from successive generations of other models’ interpretations, the solution is clear: engineering teams should use only the original model or its source data for training purposes.
The Final Word
Training AI with AI unlocks a new level of efficiency for an already powerful technology. Although models amplify misinterpretations over time when each generation trains the next, engineers could overcome this issue with the right approach. With careful validation and testing, the practice could prove genuinely innovative.

Devin Partida is the Editor-in-Chief of ReHack.com.