Artificial Intelligence text is like a cantaloupe purchased out-of-season.
It looks like a melon, but it has no taste.
People still buy those melons. They are easy and they are good enough, if availability is more important than quality.
AI text creates an outline for a blog post, then fills that outline in. Using the keywords one enters to say what is wanted, the program writes a two- or three-sentence introduction that restates those keywords as full sentences. Then it inserts a few paragraphs, each devoted to one of the keyword concepts. It concludes with a paragraph that repeats what has just been said.
Part of the genius of AI text is that it uses different words to say the same thing, which obscures the fact that it is filling space. In my trial run, a post on Social Security, the AI text repeated at least 10 times within a 461-word essay that seniors count on their Social Security check. It is true. Most do. The Social Security example cited some readily available statistics, including that 65 million people get the benefit and that it makes up 35% of the average senior's income, which gives some data heft to the argument. The text argues that seniors like spending, that merchants like having it spent, and that governments like merchants who stay in business. All true. All so reasonable. All dead obvious.
The AI text displays its lack of "understanding" of why Social Security has persisted for eight decades amid all the political turmoil over taxation and income redistribution. It is as if the AI text did a quick scan of web commentary and synthesized it, which is, of course, what it did. In that sense it is all too human, rather like someone passing along rumors gleaned from friends, Joe Rogan, or an email chain letter. In the paragraph on fairness, it reports that "wealthy Americans do not pay into the system at all once their taxable earnings exceed $132,900 per year." That is true. But the argument misses a key point for an essay on the political risk to Social Security when weighed against budget deficits. Social Security is set up as an earned benefit. Like most pensions, the benefit is approximately a return, with interest, of what was paid in by oneself and one's employer over one's working life. The benefit is computed only on earnings up to that $132,900 maximum, so payments into Social Security are capped at the same level. In fact, the formula for payouts skews toward providing a better benefit for lower-income people, but that is disguised. Social Security isn't "welfare," nor do people understand it as such. Social Security is arguably a fair return across all income levels, given that on average prosperous people live longer. It gets political buy-in across all groups. If it became a transparent transfer of money, Social Security would become an unearned benefit, and promptly become as contentious as welfare or food stamps.
It would be dangerous for a writer to put his name on unedited, unchecked AI text. It asserts with confidence things the program doesn't understand. Besides, it is boring and repetitious. It reads like the bullet points in an old-style PowerPoint presentation.
But AI text, even at its current state of development, still has value to a writer. It fills a block of space with words that sound reasonable and are grammatically correct. If readers are just skimming anyway, it looks as if the writer addressed the topic. Quick-and-dirty readers get quick-and-dirty text, and nobody is the worse. The text is meant to be read the way a cantaloupe purchased in January is meant to be eaten. Both fill space. They aren't good, but they are good enough.
I asked ChatGPT what might have been an embarrassing question:
Me: “What will happen when all the content on the Internet that ChatGPT searches is written by other robots?”
1st Try
ChatGPT: “If all the content on the internet that ChatGPT searches is written by other robots, it could have a significant impact on the performance and accuracy of the model.
Since ChatGPT is trained on a vast amount of text data from the internet, if all the text data it is exposed to is generated by other robots…” After a long pause: “Error”
2nd Try
ChatGPT: “If all the content on the Internet that ChatGPT searches is written by other robots, it will likely still be able to generate human-like text in response to prompts. However, the quality and accuracy of the generated text may be affected by the quality and accuracy of the input data.
If the robot-generated content is of high quality and accurately represents human language and thought, then ChatGPT's output may be similarly high quality. However, if the robot-generated content is flawed or doesn't accurately represent human language and thought, then ChatGPT's output may be similarly flawed.
Additionally, as the robot-generated content will probably be less diverse, ChatGPT may generate less diverse outputs. For example, if all the input data is written in the same style, format or language, ChatGPT will have a harder time to generate a variety of outputs.
Overall, the performance of ChatGPT will depend on the quality of the input data and how well it represents human language and thought.”