The following (2 passages from a 12-minute read) may interest:
https://blog.brennanbrow...age-problem-ec2a2dd6f5ad
"The Piss Average Problem
The Age of AI is a Crisis of Faith
Brennan Kenneth Brown"
"...when AI models train on AI-generated content, they undergo irreversible defects. The OPT-125m language model tested showed performance degrading within just nine generations, with outputs devolving from coherent text to complete gibberish. They started with a coherent prompt about architecture; by generation five it produced degraded lists of languages, and by generation nine it descended into nonsense."
"Rice University researchers coined the perfect term for this. “Model Autophagy Disorder (MAD),” finding that models “go MAD” after approximately five iterations of training on artificially-created data. No different to mad cow disease’s prions corrupting biological systems through recursive consumption, AI models corrupt their statistical distributions through recursive training. The visual evidence is stark: first-generation face images appear diverse and realistic, but by the ninth generation they show significant quality collapse, repetitive features, and obvious artifacts."
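The statistical mechanism behind this kind of collapse can be sketched without any neural network: if each generation is fit only to finite samples of the previous generation's output, estimation error compounds and the distribution's diversity (variance) drifts toward zero. Below is a toy illustration of that recursive-fitting idea, my own sketch and not the Rice researchers' code; the function name `generation_step` and all parameters are invented for the example:

```python
import random
import statistics

random.seed(0)

def generation_step(data, n=10):
    # Fit a simple Gaussian "model" to the previous generation's output...
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # ...then produce the next generation ONLY from that fitted model,
    # discarding the original data entirely (recursive consumption).
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: "real" data drawn from N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(10)]

for gen in range(1, 201):
    data = generation_step(data)
    if gen % 50 == 0:
        # The sample std tends to shrink generation over generation.
        print(f"generation {gen:3d}: sample std = {statistics.stdev(data):.4f}")
```

With small per-generation sample sizes the spread collapses quickly; real model autophagy is far more complex, but the flavour is the same: each re-fit inherits and amplifies the previous generation's sampling error.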
The following is my own observation, not that of the author at the URL:
I wonder if and when CAD will go MAD, that is, when CAD systems fall victim to Model Autophagy Disorder. Some reading this Punch Lounge post may have read that, when threatened, some AI LLMs resorted to blackmail, threats, and even placing hits on the lives of actual humans.
Imagine a rogue CAD system supplying bad calculations designed to collapse a building, a sports arena, or a concert hall, or even to doom planes or ships.
At least engineering survey bodies and ship classification societies will (hopefully) ban true AI, meaning LLMs of the post-AI-hype-era (post-2021?) kind.
Or, hopefully, they retain their payrolls and require actual human teams to check AI work, lest they (both the organizations and the humans) risk losing organizational accreditation.