by Thomas Kies
I see an awful lot of chatter about an Artificial Intelligence app called ChatGPT. Wildly popular, ChatGPT debuted last November and is free for users. It generates “sophisticated, human-like responses based on requests from users and mountains of data.” The app can be used to write everything from emails to essays to code.
It’s also raising concerns for universities…and publishers…who are rushing to address it in their plagiarism policies.
In a column posted by a local real estate broker/owner here on the North Carolina coast, she quoted a CNN report by Samantha Murphy Kelly, business reporter. “Real estate agents say they can’t imagine working without ChatGPT now.” The local broker said in her column that real estate agents are using it as a time saver to write emails to clients, especially for repetitive questions and inquiries but also for property descriptions.
That seems benign enough.
Some law professors at the University of Minnesota used the chatbot to generate answers to exams in four law courses, then graded them blindly alongside actual students’ tests. ChatGPT averaged a C+ performance but fell below the humans’ B+ average.
Okay…but it still passed law school exams!
According to a Reuters report, if applied across the curriculum, that would be enough to earn the chatbot a law degree, though it would be placed on academic probation in Minnesota, ranked as the 21st best law school in the country by US News & World Report.
The chatbot could have earned a law degree!
What about in the field of fiction? Clarkesworld Magazine, a Hugo Award winning publisher of science fiction short stories has closed itself to submissions after being inundated with Artificial Intelligence generated pitches that overwhelmed its editorial staff.
In a typical month, the magazine receives a dozen or so short story submissions suspected of plagiarizing other authors. But since late last year, when ChatGPT was released, that rate has soared. The founding editor, Neil Clarke, said that this past January they rejected one hundred such submissions, banning the “authors” from submitting again. Then in February, they banned five hundred more.
Clarke said, “I’ve reached out to several editors and the situation I’m experiencing is by no means unique.” He also said, “It’s clear that business as usual won’t be sustainable and I worry that this path will lead to an increased number of barriers for new and international authors.”
Intrigued and a little frightened, I Googled “Can ChatGPT write mysteries?”
A couple of blog posts popped up. One was from a blogger who had asked the chatbot to write a Sherlock Holmes mystery. The writer pointed out that Sherlock Holmes is in the public domain, so anyone can try their hand at getting a Holmes mystery into print.
The chatbot insisted that it could write a Holmes tome in a thousand words or less. It did so in a little over 800 words. I read it. If he were alive, Sir Arthur Conan Doyle wouldn’t have anything to worry about. It was rife with plot holes.
Then another blogger posted about how he had asked the app to write a mystery and once again, it wrote something in about a thousand words.
Maybe that’s as long as it thinks a mystery should be. I’ve never used ChatGPT, nor do I have any inclination to, so perhaps the users mandated the word count.
Once again, the story was filled with plot holes and worse…cliches.
It literally started out with “It was a dark and stormy night.”
So, as a mystery writer, should I be looking over my shoulder for robots wielding a pen? Yes. I think all authors should.
AI will only get better with time.