Actor Tom Hanks recently issued a statement warning people not to be fooled by ads with images of him that promote miracle cures and wonder drugs. They are all fake; his image has been used without permission.
He is just one of many celebrities whose fake endorsements of products and political candidates have recently circulated across social media platforms. This activity reminds me of my early days of teaching, when I warned students not to believe everything they read and to check their sources. These days, it’s “Don’t believe everything you read, see, or hear.” Check the source, because disseminating disinformation has become commonplace.
Big tech companies are developing deepfake detection tools. Algorithms can analyze images for characteristics that reveal whether they are authentic. Examples include scanning a face to see if it was superimposed on another body and examining background objects to see if they fit the context of the image.
Unfortunately, most of us do not have access to these tools on our own digital devices at home. So how can we detect the fakes on our own? Here are some possibilities.
Check out the News Literacy Project (newslit.org). Their mission is to advance the development and teaching of news literacy in K-12 education. Their vision is that all students are skilled in news literacy before they graduate high school, giving them the knowledge and ability to participate in civic society as well-informed, critical thinkers.
They have developed a tracking tool for misinformation in the 2024 election. Information is organized into themes:
• Candidates’ image (247)
• Candidates’ popularity (105)
• Conspiracy (93)
• Platform & policy (81)
• Election integrity (79)
The numbers in parentheses represent the number of examples they have found. The types of misinformation include content that is manipulated, fabricated, or presented in a false context. Take a look and see the kinds of fake information being generated. The more identified examples we see, the better we will become at spotting misinformation on our own.
Other suggestions come from Emeritus, a blog about artificial intelligence and machine learning (emeritus.org/blog/how-to-identify-ai-generated-content/). Look for phrases repeated multiple times within the same piece and for information readily available anywhere on the web; remember, AI draws its material from what is already on the internet. Another red flag is flawless grammar and spelling. Many legitimate pieces will have errors that a grammarian would circle with a red pen.
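For readers who like to tinker, here is a minimal Python sketch of the repeated-phrase check described above. It is an illustration only, not a tool the Emeritus article provides; the five-word phrase length and the repeat threshold are arbitrary choices.

from collections import Counter
import re

def repeated_phrases(text, n=5, min_count=3):
    """Return n-word phrases appearing at least min_count times.

    Heavy repetition of the same phrasing is one possible red flag
    for AI-generated text; it is a hint, not proof.
    """
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return [(p, c) for p, c in Counter(ngrams).items() if c >= min_count]

# A suspiciously repetitive passage trips the check; ordinary prose does not.
sample = ("Our product changes lives every day. " * 4) + "Buy it now."
print(repeated_phrases(sample))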
The public library in Albuquerque has an extensive list of questions to ask when trying to identify a fake (abqlibrary.org/FakeNews/SocialMedia). Among them:
• Is it a Facebook meme? Not the best source for reliability.
• Is it from a nonpartisan group like PolitiFact (politifact.com) or Snopes (snopes.com)? These sites tend to be more reliable.
• Is there more than one news report of the incident? If an image or a story appears in only one place, chances are it’s fake.
• Does the website address end in “lo” or “.com.co”? These endings often indicate a fake source. We also need to scrutinize URLs in social media, our emails, and texts. At a glance a URL may seem legitimate, but a closer look can reveal an irregularity. When in doubt, don’t trust the content or open the link.
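As a simple illustration of that last check, here is a short Python sketch that flags web addresses ending in lookalike suffixes. The suffix list is a made-up sample for demonstration; real fake sites use many other tricks.

from urllib.parse import urlparse

# Illustrative sample only; nowhere near a complete list.
SUSPICIOUS_ENDINGS = (".com.co", "lo.com")

def looks_suspicious(url):
    """Flag a URL whose host name ends with a lookalike suffix."""
    host = urlparse(url).netloc.lower()
    return any(host.endswith(ending) for ending in SUSPICIOUS_ENDINGS)

print(looks_suspicious("https://abcnews.com.co/story"))  # True
print(looks_suspicious("https://abcnews.go.com/story"))  # False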
Here’s a research paper that gives examples of deepfakes and how they were identified (arxiv.org/pdf/2406.08651). Minor details in an image, such as asymmetrical buckles or poles that serve no purpose, are giveaways of a manipulated photo. Perhaps we need to get out our magnifying glasses (or our camera app’s zoom) and become detectives.
Voiceovers can also be faked. Two characteristics of AI voices are background noise and speech patterns. Too much fuzziness in the background, like static, is one indicator of a fake voice. We also need to pay attention to speech patterns. Does the emotion coming through seem appropriate to the message? Or do you detect a flat, monotone delivery? Both are possible red flags of a fake voiceover. Lastly, remember that AI voices can be trained to mimic real people: the software processes multiple audio clips of a person’s voice to create fake messaging that is difficult to distinguish from the subject’s real voice.
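For the technically inclined, a flat delivery can even be measured. The rough Python sketch below uses the librosa audio library to track pitch over time; the file name and the “monotone” threshold are assumptions made up for illustration, not an established test.

import numpy as np
import librosa

# Load a voice clip (any short WAV file works for this sketch).
y, sr = librosa.load("clip.wav", sr=16000)

# Track the fundamental frequency (pitch) of the voice over time.
f0 = librosa.yin(y, fmin=80, fmax=300, sr=sr)

# Very little pitch variation suggests a flat, monotone delivery.
pitch_spread = float(np.std(f0))
print(f"Pitch spread: {pitch_spread:.1f} Hz")
if pitch_spread < 10:  # threshold is a guess, for illustration only
    print("Delivery sounds monotone; listen with extra skepticism.")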
During this election cycle, it’s not just the audio-visual and online material we need to scrutinize. When talking to a candidate, let’s not be passive listeners. Probe and clarify what they are advocating. Ask for the source of their information, and then double-check it after the conversation is finished. We may find that their representation of a bill or law is not quite accurate.
Now more than ever, we all need to become more vigilant in appraising our sources of information.
BoomerTECH Adventures (boomertechadventures.com) helps boomers and older adults navigate the digital world with confidence and competence. Active boomers themselves, they use their backgrounds as teachers to support individuals and groups with online courses, articles, videos and presentations to organizations upon request.
Send questions/comments to the editors.