Since I’m no longer making a living out of writing, I don’t really care if anything I write is dismissed as AI Slop because I sometimes use em dashes. (Actually, I probably use them more now that I’ve seen so many idiots write that the use of em dashes is a surefire indicator that a piece was created — I nearly said written, which never seems quite accurate — by an LLM.)
Still, today I saw a post by a site that offers verification of human-written text. You pay a fee to have your text scanned and scored, on a traffic-light system, for the likelihood that it is wholly or mainly human-written; then you choose whether to pay a little more for certification. What could go wrong?
Well, leaving aside the fact that the quoted scanning fee is a sizeable fraction of the per-word rate many freelance writers are paid, I wonder what ‘scanning’ actually means. Careful examination of style and content (for instance, by verifying the references in academic papers) by some of those human beings who claim to be able to spot AI Slop so easily? Or scanning by algorithms applying the same AI-identification prompts recommended by that same group of prescient humans? If I were paying nearly $0.01 per word for the service, I think I’d really want to know. But when so many self-proclaimed experts claim to discard a submission at the drop of a bullet point or emoji, it’s not surprising if hard-pressed writers value a certification for its saleability rather than its accuracy. Still, I can’t help being reminded of those Facebook algorithms that assess adherence to community standards. You know, the ones that, in the event of a dispute, are checked by other algorithms…
It didn’t surprise me to find that a number of companies are already milking this particular cash cow, and I’m sure there will be more soon. There are even sites offering a no-fee certification tier for ‘non-commercial use’, which is, no doubt, an attractive option for poorly paid freelancers. The chances are, however, that where no fee is charged, the onus is on the certified party to prove, if challenged, that they don’t use AI, since the certifying authority never actually verifies that no AI (or no more than 10% AI) has been used in the content. If I were a cynical person, I might wonder whether the sites that do claim to scan for AI activity actually do so at all, let alone reliably. Oh, wait a minute, I am a cynical person.
In fact, several of the sites I looked at that offer AI-free badges or certificates require only a sample of text for examination, rather than ongoing assessment of all the text to be certified, so it’s as easy for a dishonest writer to game such a system as it is to game a site whose only verification process runs along the lines of:
“Do you use only 10% or less of AI generation?”
“Yes.”
“Pass, friend…”
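If the ‘verification’ on such sites really is nothing more than self-attestation, the whole process boils down to something like the sketch below. Purely illustrative, of course: the function name and the 10% threshold come from the hypothetical exchange above, not from any real site’s code.

```python
# An illustrative sketch of self-attestation "verification":
# the certifier simply trusts whatever the applicant claims.

def certify(claimed_ai_percentage: float) -> str:
    """Grant a human-written badge based solely on the writer's own claim."""
    if claimed_ai_percentage <= 10:
        return "Pass, friend… have a badge."
    return "Certification denied."

# An honest writer and a dishonest one pass in exactly the same way:
print(certify(claimed_ai_percentage=10))  # Pass, friend… have a badge.
```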
And I’m not even looking at the issues raised by the proliferation of certifications, none of which seems to adhere to any authoritative external standard. Or the question of how easy it would be to counterfeit a certification, or simply to invent one of your own.
There is a need, in many contexts, for some authoritative way of identifying artificially generated content, so that its value, its accuracy, and its usefulness can be properly assessed. What I’m seeing right now, however, is a huddle of fuzzy attempts by individual sites to stake a claim, without transparency or accountability.