Sturmer's avatar

You are completely right; modern deep fakes are incredibly well-made. I generally have three strategies to counter them:

  1. Critical Thinking: Don’t believe everything you see; question everything. If you come across a video of a well-known celebrity doing or saying something strange, ask yourself why they would do that. Consider their motivations and goals. If you actively follow someone, you probably know their views and beliefs. If what you’ve seen is completely out of character for them, the content is likely altered.

  2. Verify Sources: Always check where the information came from. Is it an official channel? Is it a regulated media outlet, or a no-name "news" site referencing another no-name site?

  3. Avoid Snap Judgments: Most of the time, the more disgusting or disturbing the content, the more likely it is taken out of context. Without the beginning and ending, you’re missing the full story. Don’t judge or draw conclusions until you’ve seen the whole context.

Kane Carnifex's avatar

This is a good example for stuff which doesn't have an effect on you. No harm to yourself, just information about something which you CAN'T change.

Good points, but nothing new to me.

Shovel's avatar

This is a bit of a tricky one because, with the rise of AI, it's becoming so difficult to separate what's real from what's fake. I think there needs to be some sort of detection tool that can recognise inconsistencies, or perhaps an invisible watermark, so that any system the content is put through can highlight that it is a generated piece. I know a lot of universities now have detection for any essays that get sent in, and a lot of websites such as Instagram can also detect whether something is AI generated, but I think more work could be done in terms of detection.
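
The invisible-watermark idea above can be sketched in a few lines. This is only a toy illustration under invented assumptions: the marker string and the least-significant-bit trick are made up for the example, and real provenance schemes (such as C2PA metadata) are far more robust than this.

```python
# Toy sketch of an "invisible watermark": hide a hypothetical marker
# string in the least significant bits of the first pixels, then check
# for it later. Purely illustrative; trivially strippable in practice.

MARK = "AI-GEN"  # hypothetical marker, not a real standard

def embed(pixels, mark=MARK):
    """Return a copy of `pixels` with `mark` hidden in the low bits."""
    bits = [(byte >> i) & 1 for byte in mark.encode() for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def detect(pixels, mark=MARK):
    """Check whether the low bits of `pixels` spell out `mark`."""
    n = len(mark.encode()) * 8
    low_bits = [p & 1 for p in pixels[:n]]
    data = bytes(
        sum(low_bits[b * 8 + i] << i for i in range(8))
        for b in range(len(mark.encode()))
    )
    return data == mark.encode()

image = [200, 13, 55, 90] * 20  # stand-in for real pixel data
assert detect(embed(image)) is True
assert detect(image) is False
```

A production watermark would need to survive compression, cropping, and re-encoding, which is exactly why the "more work on detection" point stands.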

Kane Carnifex's avatar

Hmm, on which topics do you want to know whether it's fake or not?

Like social media, I wouldn't care; this content is for entertainment, not for learning :P Political stuff in 2024 is kind of MAD TV anyway; more or less fake doesn't affect it.

A full deepfake movie/game goes all under entertainment. And entertainment never had the need to be real.

Paul's avatar

Make AI software illegal for the general public. Where organisations are given access to it, have its use regulated, secured, and monitored in the same way people's private information is. By that I mean a company is obligated to regularly prove its use is justified, likenesses have signed agreements from said person, and the software, as well as all files created using it, are secure.

Shovel's avatar

All it takes is for one corrupt company (like the government) to get their hands on it, though. You gotta remember that the dark web originated as a government programme; now look at it.

Paul's avatar

That's exactly why it should be regulated by a non-government organisation, like data protection. If people can be held accountable, then they would ensure it's kept legal. It's impossible to completely stop it being used for bad, but that goes for lots of things. The important thing is that everyday people would not have legal access to it, and there would be records of its legal use in case it's used unjustly.

Boomer's avatar

Was it a government programme? I've not seen anything about that, and I thought it started as an academic research project.

Shovel's avatar

Yeah, the dark web was basically made by the government to "catch criminals", they said, but really to do business online without getting traced or hacked. It was originally used by the United States Department of Defense to communicate anonymously. And now look at it.

Kane Carnifex's avatar

They made hacking software illegal in Germany. Now we can't test our own stuff, because it's illegal to do.

This is not how it works at all.

Stella's avatar

I think you can discern deepfakes from the eyes; they always seem a little off, and their movement seems quite artificial.

Also, a degree of critical thinking can help: researching things from multiple sources, etc.

Sinclair's avatar

I think deepfakes are dangerous. I've read many articles about them, and yeah, they're too dangerous. I mean, using AI to help humanity is good, but deepfakes? They're too dangerous, like spreading misinformation, etc.

I think we have to limit AI use, especially when it's given "too much" information that can be used to create fake information or, even worse, deepfakes. It also has to be regulated HEAVILY by both governments and tech companies before it's too late.

Kane Carnifex's avatar

If you want to charge your iPhone in the microwave, there is nobody who can help you. These people already missed too many classes from life.

^^ This is the level of critical thinking people don't have right now!!!

Let me test you.

Does your country provide drinking water of good quality? If so, do you buy bottled water? If yes, you failed. Go read about drinking water vs. bottled water.

L

The tech that goes into deepfakes needs to be matched. Detection algorithms already exist, but more needs to be done to clearly identify deepfakes, not just in the harmful sense.

Stronger punishments for harmful deepfakes: public trial and prosecution of those creating them.

Possibly more identity and verification systems put in place on social media, although this may impact user experience.

Education: there are already many people falling for obvious AI images and deepfakes, so educate people on how to spot and report them.

Makster's avatar

I do wonder where the line and the responsibility lie, and whether tech companies should hold more responsibility. This topic is probably very hot right now, as we are seeing governments trying to make big tech accountable for the negative effects of social media.

I would have thought that the T&C you agree to when you sign up or accept a licence would wash tech companies' hands of responsibility, i.e. you broke our T&C, therefore you are liable, not ourselves. If we consider deepfake tech as a tool: we don't hold the manufacturers of machetes or guns (the latter is more contentious) responsible for attacks, because it is a user responsibility. In a similar way, are the manufacturers responsible, or the merchant that sells the tool (does the merchant have a responsibility to run adequate background checks)? And finally, as an extension: if you create a deepfake but only share it with a few friends, and it then gets leaked and causes harm, who is responsible? Yourself, for creating it (this could be an inoffensive image only to be perverted by an offender, but you hold the licence)?

To me it does lie on the responsibility of the user and education (hopefully enforced and assisted by big tech). Deepfakes are media designed to be misinterpreted. And misinterpretation of media has been around for a very long time, such as metal being the devil's music, or the Smurfs being anti-Christian. It isn't the creator's intention, but mass misinterpretation perverts the idea.

Kane Carnifex's avatar

Therefore social media is entertainment :)

I like the comment with the gun; it's the user who will misuse it.

Kane Carnifex's avatar

What is a harmful deepfake? In which thematic context?

As said, social media is for entertainment, not learning or other stuff.

If your politicians are using Twitter, they are clowns.

Makster's avatar

I think Sturmer has the right idea and put a lot of thought into their response. As the post states, deepfakes have been around for a long time; we've seen Photoshop and altered images since the dawn of the internet, so there is a responsibility on users to apply critical thinking and research before jumping to conclusions or mass-sharing fake images/news. I think it will extend to public service education. When I grew up, internet education was about not revealing your personal details on the internet, which is quite outdated nowadays (though you can argue there is still a reason for screen names, and doxxing). I think education is really important going forwards in the age of deepfakes.

I'm hard-pressed to think it is the responsibility of the software developer to control everything a user produces. Just as there are T&C in software agreements, these will be broken (or, more often, not read), so how are developers going to enforce what is produced? I find it a restriction on creativity if developers controlled what users can generate.

However, as stated in the comments, copyright notices or built-in watermarks will help identify deepfakes.

My recommendation would be a higher barrier to entry for using this technology, making it less available or accessible for the public to use. This can also extend to increasing the price or adding stronger DRM security. If it is for commercial use only and for professional companies, hopefully it'll only be used for legitimate means.

Kane Carnifex's avatar

So instead of educating people in basic stuff like critical thinking, you want to limit it and let the "dumb" people be dumb, but protected from charging their iPhone in the microwave?

The copyright/watermark thing I have now read several times, but it can be broken.

Makster's avatar

Sorry, I don't understand. I think my comment says:

I think education is really important going forwards in the age of Deep Fakes.

EveOnlineTutorials's avatar

This is a disturbing trend that is emerging and makes it very hard for those of us who are actually fans of people to tell what's real and what's not.

IMO "deep fakes" should be treated the same as identity theft in all countries. I know this sounds harsh, but at the end of the day they are impersonating very famous people, whose careers and families could be affected by what they are doing.

It should be labelled as illegal. If I found a video of myself online, say, being massively racist etc., it could destroy any chance I would ever have at making it "big" as a YouTuber, as the internet never forgets.

Deep fakes should be classed as illegal. This is just my take on it.

Kane Carnifex's avatar

Puh, good point. I'd find it complicated if somebody faked me as a racist on YouTube.

On the other hand, I am not famous enough to care, so nobody would watch it.

But then again, it would be "entertainment". I personally don't have a platform from which I can tell the world anything.

I still think everything that happens on social media is entertainment, in a good or a bad way. But it never can/should interact with your real life.

Kane Carnifex's avatar

You would need to split this into specific topics.

Stuff which affects you

I would make this super short: if you are aware of phishing and able to protect yourself, you are fine; there is nothing you need to do for the future.

If you are unable to understand phishing, well, how did you survive until 2024? People who post their credit card on Twitter are another level.

Stuff which doesn´t affect you.

Fake News, Photoshop and people who are famous.

They have no effect on your life, so why should I need protection? Fake politicians: I can only vote for a political party. What the political party does isn't in my hands anymore; next time I can vote differently. I want to see results, not words as smoke.

Fake news? Did you buy toilet paper because of the shortage (a German thing during corona)? WHY no Japanese toilet? If you still have none, we are not the same!

NOBODY IN THE WORLD WOULD WIPE SHIT FROM THEIR FACE WITH PAPER, BUT FOR YOUR ASS IT IS ENOUGH. Don't talk to me if you still live like a Neanderthal.

---- Additional content ---- because sometimes I need to speak up.

OK, I read through the submissions and I am a little bit lost. Most of you are stuck on the idea that any kind of deepfake already has a negative effect on you. I doubt this!

Watermark/Copyright/Digital signature

If you still can't see a deepfake now, how will you ever learn to? It also leads you into the false conclusion that if something is not marked as a deepfake, it must be true.

Illegal / Restrict access to it

The people who want to use it in a bad way will use it in a bad way; it doesn't matter if you make it illegal or difficult to access. Restrictions like these are only born from limited skills on the internet.

Critical thinking ^^ this was the best point I saw, and also one of the first submissions.

As said, if you see social media just as entertainment, nothing can harm you. But if you think YouTube or its creators are real or smart people… they forget to do their taxes. They don't know a lot of basic life stuff…

CelestialFlea's avatar

Sorry bud, but I'm really not sure how most of this relates to the bounty itself; a lot of it ain't even related to deepfakes or combating them, though I do agree that deepfakes can cover a wide range of stuff.

CelestialFlea's avatar

I think Sturmer covered a lot of the main points pretty well and I'm not sure how much I can improve on it, but I'll try.

  • If it's too good to be true, it probably is.

I know you've heard it all before, yet a lot of people seem to forget this old saying, jump to conclusions based on what they see at face value, and don't look any deeper. If a public figure/celeb is messaging you and offering you something: STOP, THINK. Why would they message you out of the blue? Why would they endorse some random cryptocurrency? Why would they ask you for money?

Verify the account is authentic; check the account history, their tweets, posts, etc. Is there a sudden change in what they usually post? Or is it a pretty new account with stolen images? If it's the latter, congrats, you avoided being scammed using common sense. If the account seems genuine but the behaviour is strange, it's possible their account was compromised. As Sturmer said, QUESTION EVERYTHING.

  • Look for imperfections.

As someone with experience working with generative AI, it becomes relatively simple to spot fakes when you know what to look for and look at images a little closer. When it comes to AI-generated people, for example, expressions are one thing AI seems to struggle with at the moment, so a lot of the time expressions are gonna be pretty neutral and void of life, like they were posing for a driver's license or mugshot.

You can also look for imperfections in the image itself: for example, the lighting might be wrong, shadows in the wrong places, mismatched textures or textures that aren't aligned properly, etc. Other telltale signs are imperfections in the person themselves: the wrong eye colour or mismatched colours, missing or extra fingers, disproportionate eye shapes, etc. AI often struggles with intricate details and textures, ESPECIALLY text/logos.

Compare recent photos of the person in question and look for things that are out of place, like I suggested.

  • AI Voices have flaws

Another growing problem with deepfakes is how realistic AI-generated voices are starting to get; the untrained ear might not be able to tell the difference. But as with images, there are things to watch out for.

One of the biggest giveaways with many AI voices is that they can be very monotone. Often there will be very little change in the sound and pitch of the person's voice. Listen carefully to how they start and end sentences, and how the tone of the voice only fluctuates within a certain range and rarely moves beyond it. Listen to the pronunciation of words: quite often AI will get it wrong or struggle to give a word the right tone, and the volume is always exactly the same too; it'll just sound weird.
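
The "flat volume" cue can be turned into a toy measurement: compare how much frame-to-frame loudness varies between two signals. Everything here is invented for the sketch (the synthetic signals, the frame size), and real detectors use far richer features than loudness alone.

```python
# Toy illustration of the monotone-voice cue: a flat, low-variation
# loudness profile is one (weak) hint of synthetic speech.
import math
import statistics

def frame_energies(samples, frame=200):
    """RMS loudness of each fixed-size frame of the signal."""
    return [
        math.sqrt(sum(s * s for s in samples[i:i + frame]) / frame)
        for i in range(0, len(samples) - frame + 1, frame)
    ]

def volume_variation(samples):
    """Relative spread of frame loudness; lower means more monotone."""
    energies = frame_energies(samples)
    return statistics.pstdev(energies) / statistics.mean(energies)

# Synthetic stand-ins: a voice-like tone whose volume swells and fades,
# versus a perfectly flat one.
lively = [math.sin(i / 10) * (1 + 0.5 * math.sin(i / 500)) for i in range(4000)]
flat = [math.sin(i / 10) for i in range(4000)]

assert volume_variation(lively) > volume_variation(flat)
```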

  • CONCLUSION

Whilst I suppose it could be possible to have some sort of digital marker on content, similar to how blockchain items verify authenticity, it seems like a lot of trouble and expense when, in most cases, common sense and a little knowledge can be used to combat most deepfakes.
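
The digital-marker idea sketched above boils down to signing a hash of the media bytes so anyone with the matching key can check the file was not altered. The key and file contents below are placeholders; real provenance systems (such as C2PA) use public-key signatures rather than a shared-secret HMAC like this.

```python
# Minimal sketch of content provenance via a signed hash. A publisher
# signs the media; verification fails if a single byte changes.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-secret-key"  # placeholder, not a real key

def sign(content: bytes) -> str:
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

video = b"original footage bytes"
tag = sign(video)
assert verify(video, tag)
assert not verify(b"tampered footage", tag)
```

Note this proves a file is unmodified since signing, not that it is real footage, which is why it complements rather than replaces common sense.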

Limal's avatar

Humans have a unique (at the moment of writing, hehe) ability to detect tiny nuances in mimicry and emotion that are incredibly difficult for AI to replicate convincingly. By harnessing this natural skill, we can develop strategies to combat harmful deepfakes. Here are a few ideas for how we could leverage this!

  1. Crowdsourced Verification Platforms: Create platforms where users can submit suspected deepfakes for verification. Leverage a community of trained volunteers and professionals who can analyze videos for subtle discrepancies in facial expressions and emotional cues. Encourage public participation by gamifying the process with rewards for accurate detections. Or even host competitions and challenges that invite participants to create and detect deepfakes, helping to refine detection methods and raise awareness.

  2. Training Programs for Enhanced Detection: Develop training programs to educate people on how to spot deepfakes by focusing on common inconsistencies, such as unnatural eye movements, subtle changes in lighting, or mismatched emotions. Incorporate these programs into schools, workplaces, and online courses to build widespread awareness.

  3. Public Awareness Campaigns: Launch campaigns to raise awareness about the existence and dangers of deepfakes, emphasizing the importance of critical thinking and skepticism when consuming media. Highlight real-world examples where human intuition has successfully identified deepfakes, reinforcing trust in this natural ability.

  4. Strengthening Legal Frameworks: Advocate for stronger laws and regulations that penalize the malicious creation and distribution of deepfakes. Work with policymakers to ensure that legal measures support and enhance human-driven detection efforts. Include consequences for sharing unverified information, so people will share only things they are sure about.
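
Idea 1 above (crowdsourced, gamified verification) can be sketched as a small scoring system: volunteers vote on a suspected clip, and a voter's influence grows with their past accuracy. All names and scoring rules here are invented purely for illustration.

```python
# Hypothetical sketch of a gamified crowdsourced verification platform.
from collections import defaultdict

class VerificationPlatform:
    def __init__(self):
        # user -> [correct votes, total votes]
        self.record = defaultdict(lambda: [0, 0])

    def weight(self, user):
        """1.0 for newcomers, up to 2.0 for a perfect track record."""
        correct, total = self.record[user]
        return 1 + correct / total if total else 1.0

    def decide(self, votes):
        """votes maps user -> True ('deepfake') or False ('genuine')."""
        score = sum(self.weight(u) * (1 if v else -1) for u, v in votes.items())
        return score > 0  # True: the weighted crowd calls it a deepfake

    def resolve(self, votes, truth):
        """Once ground truth is known, update each voter's record."""
        for user, vote in votes.items():
            correct, total = self.record[user]
            self.record[user] = [correct + (vote == truth), total + 1]

platform = VerificationPlatform()
votes = {"alice": True, "bob": True, "carol": False}
assert platform.decide(votes) is True   # 2 vs 1, equal weights
platform.resolve(votes, truth=True)
assert platform.weight("alice") == 2.0  # rewarded for a correct call
```

Weighting by track record is one way to implement the "rewards for accurate detections" incentive while dampening unreliable voters.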

I’m really worried about the elderly, as they are a risk group: they are used to believing what they see, and they do not possess modern tech skills at a good enough level to verify content.

© Just About Community Ltd. 2024