Submissions (11)

Limal · 5/21/2024

Humans have a unique (at the moment of writing, hehe) ability to detect tiny nuances in mimicry and emotion that are incredibly difficult for AI to replicate convincingly. By harnessing this natural skill, we can develop strategies to combat harmful deepfakes. Here are a few ideas for how we could leverage this!

  1. Crowdsourced Verification Platforms: Create platforms where users can submit suspected deepfakes for verification. Leverage a community of trained volunteers and professionals who can analyze videos for subtle discrepancies in facial expressions and emotional cues. Encourage public participation by gamifying the process with rewards for accurate detections. Or even host competitions and challenges that invite participants to create and detect deepfakes, helping to refine detection methods and raise awareness.

  2. Training Programs for Enhanced Detection: Develop training programs to educate people on how to spot deepfakes by focusing on common inconsistencies, such as unnatural eye movements, subtle changes in lighting, or mismatched emotions. Incorporate these programs into schools, workplaces, and online courses to build widespread awareness.

  3. Public Awareness Campaigns: Launch campaigns to raise awareness about the existence and dangers of deepfakes, emphasizing the importance of critical thinking and skepticism when consuming media. Highlight real-world examples where human intuition has successfully identified deepfakes, reinforcing trust in this natural ability.

  4. Strengthening Legal Frameworks: Advocate for stronger laws and regulations that penalize the malicious creation and distribution of deepfakes. Work with policymakers to ensure that legal measures support and enhance human-driven detection efforts. Include consequences for sharing unverified information, so that people only share things they are sure about.

I'm really worried about the elderly, as they are a risk group: they are used to believing what they see, and they don't possess modern tech skills at a good enough level to verify content.

CelestialFlea · 5/21/2024


I think Sturmer covered a lot of the main points pretty well and I'm not sure how much I can improve on it, but I'll try.

  • If it's too good to be true it probably is.

I know you've heard it all before, yet a lot of people seem to forget this old saying, draw conclusions based on what they see at face value, and don't look any deeper. If a public figure/celeb is messaging you and offering you something: STOP. THINK. Why would they message you out of the blue? Why would they endorse some random cryptocurrency? Why would they ask you for money?

Verify that the account is authentic: check the account history, their tweets, posts, etc. Is there a sudden change in what they usually post? Or is it a pretty new account with stolen images? If it's the latter, congrats, you avoided being scammed using common sense. If the account seems genuine but the behavior is strange, it's possible their account was compromised. As Sturmer said, QUESTION EVERYTHING.

  • Look for imperfections.

As someone who has experience working with generative AI, it becomes relatively simple to spot fakes when you know what to look for and look at images a little closer. When it comes to AI-generated people, for example, expressions are one thing AI seems to struggle with at the moment, so a lot of the time expressions are going to be pretty neutral and void of life, like the subject was posing for a driver's license or mugshot.

You can also look for imperfections in the image itself: the lighting might be wrong, shadows in the wrong places, mismatched textures or textures that aren't aligned properly, etc. Other telltale signs are imperfections in the person themselves: mismatched eye colors, missing or extra fingers, disproportionate eye shapes, and so on. AI often struggles with intricate details and textures, ESPECIALLY text/logos.

Compare recent photos of the person in question and look for things that are out of place, like I suggested; one cheap automated complement to this is sketched below.
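One trick that complements eyeballing lighting and shadows is error level analysis (ELA): resave the image as a JPEG and look at where the recompression error is unevenly distributed, since pasted-in or regenerated regions often recompress differently from the rest. Here's a minimal sketch in Python with Pillow; the file name and quality setting are just illustrative, and ELA is a hint, not proof:

```python
# Minimal error level analysis (ELA) sketch: regions that recompress very
# differently from the rest of the image can hint at manipulation.
# "photo.jpg" and quality=90 are illustrative assumptions.
import io
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")

# Recompress at a known JPEG quality and diff against the original.
buf = io.BytesIO()
original.save(buf, format="JPEG", quality=90)
buf.seek(0)
recompressed = Image.open(buf)

diff = ImageChops.difference(original, recompressed)

# Amplify the differences so they are visible to the eye.
extrema = diff.getextrema()                  # per-channel (min, max)
max_diff = max(hi for _, hi in extrema) or 1
scale = 255.0 / max_diff
ela = diff.point(lambda px: min(255, int(px * scale)))
ela.save("photo_ela.png")                    # bright patches = inconsistent error
```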

  • AI Voices have flaws

Another growing problem with deepfakes is how realistic AI-generated voices are starting to get; the untrained ear might not be able to tell the difference. But as with images, there are things to watch out for.

One of the biggest giveaways with many AI voices is that they can be very monotone. Often there will be very little change in the sound and pitch of the person's voice. Listen carefully to how they start and end sentences, and how the tone of the voice only fluctuates within a certain range and rarely strays from it. Listen to the pronunciation of words: quite often AI will get it wrong or struggle to give a word the right tone, and the volume is always exactly the same, so it'll just sound weird.
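To put a rough number on that flatness, you could track pitch and loudness over a clip and flag unusually low variation. A minimal sketch with librosa; the file name and thresholds are illustrative guesses, not calibrated values, and naturally calm speech can also score low:

```python
# Rough "monotone voice" heuristic for the giveaway described above.
# Assumptions: librosa is installed and "clip.wav" is a speech recording;
# the thresholds below are illustrative, not calibrated.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav", sr=None)

# Track the fundamental frequency (pitch) over time with pYIN.
f0, voiced_flag, _ = librosa.pyin(
    y, sr=sr,
    fmin=librosa.note_to_hz("C2"),   # ~65 Hz, low end of speech
    fmax=librosa.note_to_hz("C6"),   # ~1047 Hz, high end
)

voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]
pitch_spread = np.std(voiced_f0)     # Hz; natural speech varies a lot

rms = librosa.feature.rms(y=y)[0]
loudness_spread = np.std(rms) / (np.mean(rms) + 1e-9)  # relative volume variation

print(f"pitch std: {pitch_spread:.1f} Hz, relative RMS std: {loudness_spread:.2f}")
if pitch_spread < 20 and loudness_spread < 0.2:
    print("Unusually flat pitch and volume: worth a closer listen.")
```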

  • CONCLUSION

While I suppose it could be possible to have some sort of digital marker on content, similar to how blockchain items are verified for authenticity, it seems like a lot of trouble and expense when, in most cases, common sense and a little knowledge can be used to combat most deepfakes.
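For what it's worth, that kind of marker doesn't strictly need a blockchain: an ordinary digital signature over the file's hash already lets anyone with the publisher's public key verify the content's origin. A minimal sketch using Python's cryptography package; key distribution and trust are hand-waved here:

```python
# Minimal sketch of signing content so its origin can be verified later.
# Uses the "cryptography" package; how the public key is distributed and
# trusted is out of scope for this sketch.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# The publisher generates a keypair once and shares the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...contents of the published video file..."
digest = hashlib.sha256(video_bytes).digest()

# The publisher signs the hash and distributes it alongside the video.
signature = private_key.sign(digest)

# Anyone with the public key can check the file wasn't altered or forged.
try:
    public_key.verify(signature, digest)
    print("Signature valid: content matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: content altered or not from this publisher.")
```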

Kane Carnifex · 5/21/2024

You would need to split this into specific topics.

Stuff which affects you

I would make this super short: if you are aware of phishing and able to protect yourself, you are fine; there is nothing you need to do for the future.

If you are unable to understand phishing, well, how did you survive until 2024? People who post their credit card on Twitter are another level.

Stuff which doesn't affect you

Fake News, Photoshop and people who are famous.

They have no effect on your life, so why should I need protection? Fake politicians? I can only vote for a political party; what the party does isn't in my hands anymore. Next time I can vote differently. I want to see results, not empty words.

Fake news? Did you buy toilet paper because of the shortage (a German thing during corona)? WHY no Japanese toilet? If you still have none, we are not the same!

NOBODY IN THE WORLD WOULD WIPE SHIT FROM THEIR FACE WITH PAPER, BUT FOR YOUR ASS IT IS ENOUGH. Don't talk to me if you still live like a Neanderthal.

---- Additional Content ---- Because sometimes I need to speak up.

Ok, I read through the submissions and I am a little bit lost. Most of you are stuck on the assumption that any kind of deepfake already has a negative effect on you. I doubt this!

Watermark/Copyright/Digital signature

If you still can't spot a deepfake now, how will you ever learn to? Worse, it misleads you into the conclusion that if something isn't marked as a deepfake, it must be true.

Illegal / Restrict Access to it

The people who get targeted in a bad way will get targeted in a bad way. It doesn't matter if you make it illegal or difficult to access; that only stops people with limited skills, and not everyone on the Internet was born here.

Critical Thinking ^^ this was the best point I saw, and also one of the first submissions.

As I said, if you treat social media as just entertainment, nothing can harm you. But if you think YouTube is real, or that creators are smart people… they forget to do their TAXES. They don't know a lot of basic life stuff…

FirestormGamingTeam · 5/21/2024


This is a disturbing trend that is emerging, and it makes it very hard for those of us who are actually fans of these people to tell what's real and what's not.

IMO "deep fakes" should be treated the same as identity theft in all countries, I know this sounds harsh but at the end of the day they are impersonating very famous people who could have careers, and families affected by what they are doing.

It should be labelled as illegal: if I found a video of myself online, say, being massively racist, it could destroy any chance I would ever have at making it "big" as a YouTuber, as the internet never forgets.

Deep fakes should be classed as illegal. This is just my take on it.

Makster · 5/20/2024


I think Sturmer has the right idea and put a lot of thought into their response. As the post states, deepfakes have been around for a long time; we've seen Photoshop and altered images since the dawn of the internet, so users have a responsibility to use critical thinking and research before jumping to conclusions or mass-sharing fake images/news. I think this will extend to public service education. When I grew up, internet education was about not revealing your personal details online, which is quite outdated nowadays (though you can argue there is still a reason for screen names, given doxxing). I think education is really important going forward in the age of deepfakes.

I'm hard-pressed to think it is the responsibility of the software developer to control everything a user produces. Just as there are T&Cs in software agreements, these will be broken (or, more often, not read), so how are developers going to enforce what is produced? I find it a restriction on creativity if developers controlled what users can generate.

However, as stated in the comments, built-in copyright notices or watermarks would help identify deepfakes.
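As a toy illustration of the built-in watermark idea, a generator could stamp every output before saving it. A minimal sketch with Pillow; the label text, placement, and file names are arbitrary choices, and since a visible stamp is trivially croppable, real schemes pair it with signed metadata:

```python
# Trivial sketch: stamp a visible "AI GENERATED" label on an output image.
# A real provenance scheme would also embed signed metadata; this shows
# only the visible-watermark half. File names are illustrative.
from PIL import Image, ImageDraw

img = Image.open("generated.png").convert("RGBA")
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

# Semi-transparent label in the bottom-left corner, default bitmap font
# keeps the sketch dependency-free.
draw.text((10, img.height - 20), "AI GENERATED", fill=(255, 255, 255, 160))

watermarked = Image.alpha_composite(img, overlay)
watermarked.convert("RGB").save("generated_watermarked.jpg")
```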

My recommendation would be a higher barrier to entry for using this technology, making it less available or accessible to the general public. This could also extend to increasing the price or adding stronger DRM. If it's for commercial use only, by professional companies, hopefully it'll only be used for legitimate means.