“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” -- Jurassic Park’s Ian Malcolm.
In Jurassic Park, Jeff Goldblum’s character said the above in reference to re-creating dinosaurs, but lately the statement rings true regarding artificial intelligence. On Monday, the so-called “Godfather of AI,” Geoffrey Hinton, quit Google and, according to several media reports, said he regretted the work he had done because of how AI could be misused or abused. He wanted to be able to speak out about the dangers of the technology and did not feel it was right to do that while working for Google.
My first thought was, wow, was he really that naïve to think that unscrupulous and immoral people would not use good things for ill intent? And, let’s be honest, there are already reports of misuse.
OpenAI Chief Technology Officer Mira Murati is quoted on OpenAI’s website: “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.” (OpenAI is the creator of one of the latest AI products, ChatGPT.)
That all sounds great, but my intentions and my values can differ greatly from my neighbor’s, or from those of someone in another city, state or country. To say that AI machines are aligned with human intentions is to believe we all have good, honest intentions, and that is not the case.
In a world and country so divided about nearly everything, what values and intentions are AI creators instilling in their programs? What intentions and values do the users have? Are they the same as the creators’? Maybe yes, maybe no. That is where some of the danger lies.
Until recently I had not considered AI’s implications in the newsroom. Several weeks ago my husband asked if I had, and I admitted I had not. I truly did not give it much more thought until our operations director emailed the publishers and managers about ChatGPT and included a link to a column from the Gateway Journalism Review on how AI and ChatGPT can impact journalism. The column was written by Nick Kratsas, the digital media manager for West Virginia University Student Media.
Kratsas did his own research and found good and bad. The good, as I saw it, was the story ideas it could generate. The bad, as he noted, “I gave it a prompt that our university president announced that enrollment was down 20% and that this would result in a 15% cut in employees. ChatGPT created an article talking about how the university would pull through this time and focus on student enrichment, and even made up quotes from the president.”
Did you read that? AI “made up” quotes from the president.
Kratsas wrote, “I could see this could be a great tool for generating ideas and doing research, but I was also worried about a student turning in an article that ChatGPT wrote.” But what about professionals? We already know seasoned photographers have manipulated photos, so why not let AI write a story for you?
The more research I have done, the more I realize I share Kratsas’ concerns. Not with our current staff here at the Northern Wyoming News, but for newspapers and news media in general. More and more newspapers are outsourcing reporters, designers and more. They are looking for cost-cutting methods and time efficiency, but I do not want to shortcut the information to our readers.
I have not given ChatGPT a run, but our operations director did, with a column regarding potholes in Saratoga and a news story on the emotional impacts of active shooters.
Reading those samples, I found the pieces were well written but with no real substance. There were no specifics about what the city was doing regarding potholes or the struggles it faced in addressing the issue; it was all general terms. I had the same feeling about the news story generated by AI. No real substance. Community journalism needs to make things personal and real for the readers; otherwise they can just go to any website or generate their own story from ChatGPT.
As journalists we need to be giving people personal and real information.
When I read both samples I thought they could serve as an outline of sorts, giving a writer something to work with; however, some writers, whether writing for a newspaper or perhaps a college term paper, would just take the AI story and run with it as original work.
One of the concerns raised by Kratsas and our operations director is that there are no sources noted in AI-generated stories, no way for a proofreader to fact-check information by going to a reporter’s notes or original sources.
I think if AI pioneers like Hinton are concerned about the direction and future of AI technology, then perhaps you and I should be concerned as well.
AI is in our lives and will continue to be. It is the 21st century, and just as “life found a way” for the dinosaurs to continue to exist in the Jurassic Park/World franchise, AI and technology will continue to exist and advance.
The question we need to ask ourselves, then, is this: do we rely on AI companies to build safety protocols into their programming, or do we think for ourselves, consider how we use AI products and create our own protocols?
Kratsas wrote, “With no policies in place for such a scenario, I decided to bring my concerns to my student media director. We both agreed that it was unlikely one of our students would turn in an AI written article, but we should probably have something in our handbook, just in case.”
This is something every newsroom, including ours, needs to consider.
We must not let AI do our thinking for us; we must decide how much influence we allow it to have over our lives.
I am all for technology that can better our lives, but let’s not take the human element out of our lives, especially out of our writing.
-Karla Pomeroy