
Opinion

Daniel’s Column

By Daniel Evans
I’m in a class that was recently tasked with watching two side-by-side 15-second videos of a woman speaking and determining which was real and which was AI-generated.

Of the roughly 25 people in the room, only about half were able to discern which video was real. (For anyone wondering, I got it right.)

The AI-generated video was created from a 10-minute recording the speaker made in front of her phone camera. She then told an AI program to produce a video of her saying words she had scripted, and the computer nearly perfectly matched her movements and her voice.

That’s all it took to fool half of the people in this class, and these are very smart people. Imagine that.

Do you think you’d fall for it?

Or do you think you’d be able to tell if something was written by artificial intelligence?

There’s only one way to find out. Which one of the following paragraphs was written by me, and which was written by AI?

Paragraph 1: The fear of the future of AI really scares me because so many people are easily fooled by it. Used to, when you showed someone a picture or a video of something that happened, they could know with 100% certainty that it depicted a real incident. Now, you have to wonder if the video was doctored or made up entirely.

Paragraph 2: I find the future of AI genuinely unsettling because it’s becoming so easy for people to be deceived by it. There was a time when showing someone a photo or video was enough to prove something really happened. But now, you always have to question whether the footage has been altered or is entirely fabricated.

Paragraph 1 was written by me, and paragraph 2 was written by ChatGPT.

I’ll let you decide if the computer wrote it better than I did.

Let’s do one more, just for fun.

Paragraph 1: Journalists all over are concerned about what AI could mean for our field. Some companies have started using AI to quickly generate sports stories from high school game stats, and a few even let AI produce news content. But not us — we’re all human here, and we’re grateful to have Brian Lester and Jackson Buhler on our sports team.

Paragraph 2: Journalists everywhere are worried about the future of AI, and what it might mean for our profession. Many companies are now using a form of AI to quickly write sports stories from high school games based on a box score. Some have even branched into allowing AI to write news content. We are not. We are human across the board and thankful to have Brian Lester and Jackson Buhler in our sports department.

I wrote paragraph 2, and ChatGPT wrote paragraph 1.

It’s amazing what computers are able to do in 2024. It’s easier than ever to connect with other people and to research and write papers.

But it’s scary to think about how much misinformation is out there, and how quickly someone can distort something to make it look believable. I’ve seen people fooled online by fake news stories so dumb it’s dumbfounding that anyone believed they were true.

In turn, “fake news” makes the job of real journalists even tougher. And I mean actual fake news, not news that people call fake because they disagree with it politically.

Don’t be fooled. If something looks too unbelievable to be true, don’t repost it just because it makes fun of your favorite political candidate. Or if you do, at least add a disclaimer that it’s not real, but you thought it was funny.

There are people out there who truly believe everything they read. And in 2024, that’s not very smart.
