On Jan. 7, Meta announced it was ending its third-party fact-checking program on its social media platforms and replacing it with user-written community notes.
In the age of misinformation and disinformation, fact-checking is incredibly important. Meta's decision to end its third-party fact-checking program, which partnered with independent American journalists to identify false information on its platforms, is disappointing.
President Donald Trump has been known to make untruthful claims about issues such as the Jan. 6 attack on the Capitol and the election results. Trump has been fact-checked numerous times by the media, including during his debate with former Vice President Kamala Harris and even during his inaugural address, and he has continued to make false claims since taking office.
At the Northern Star, we have an extensive fact-checking policy. Everyone who edits an article before publication (section editors, the managing editor, the editor-in-chief and copy editors) ensures that the information presented is 100% accurate to uphold our commitment to accurate reporting.
David Gunkel, a professor in the communication department, said Meta's new fact-checking policy should be of great concern to everyone.
“We’re losing a tool in our set of tools for controlling social media content and the way that content could be perceived and used by individuals on these platforms,” Gunkel said. “So I don’t think fact-checking is the singular answer here but losing that as a strategy does diminish the capabilities of making use of social media for access to accurate information.”
With artificial intelligence becoming more prevalent, fact-checking to make sure news is accurate and real matters more than ever. AI platforms like ChatGPT and Copilot can make up facts, so readers must do their own research to verify that the news they consume is factual.
Gunkel said that with the prevalence of AI, social media users may struggle to determine which headlines or images are real and which are AI-generated.
“With regards to AI content, whether it be textual content or images or video, it is becoming increasingly difficult to distinguish the AI-generated content from fully human-generated content. And that’s because the large language models and the diffusion models that do images have gotten much better at creating human-seeming content without any sort of direct human involvement in the creation of that content,” Gunkel said. “And you can see this in the images that are floating around online, some of the videos that have been created, a lot of the deepfake audio that is out there. We are living at a time when the ability to create these technologies, the ability to create this content, is getting easier because of the technologies and their capabilities.”
Adrienne Chinski, a sophomore psychology major, is worried that the people who run social media platforms like Meta's are biased.
“It’s concerning that there’s not really, as far as I know, any guidelines of what is allowed and what’s not. And we’ve already seen that the people in charge of these social media platforms are incredibly biased, so I think we can predict what kinds of information they’re going to allow to stay up and what kinds they’re not,” Chinski said.
Mark Zuckerberg, the co-founder and CEO of Meta, is friends with Trump and attended the inauguration.
“I think it’s becoming more obvious that a lot of the people who run these social media companies are aligned with Trump and even working directly with him,” Chinski said. “So, it’s definitely concerning that we’ve seen a decrease in fact checking right when he’s being inaugurated because it’s concerning to think about how these social media platforms are going to cater to him and his interests.”
In an age of social media and inaccurate reporting, it’s important that people be skeptical of everything they see online. By fact-checking the news, society can be better informed and educated.