There has been a huge influx of "AI (insert politician name)" content in my personal feed lately; since I follow politics fairly closely, this setting makes sense to me.
The question I have is how this might extend to other AI use cases in the future, like AI assets in thumbnails. On the one hand, I don't want to have to disclose when I make a thumbnail of Leon S. Kennedy holding a cat, because I don't want my video to get added to that same kind of list. At the same time, I REALLY dislike super dishonest AI thumbnails.
For instance, a video came out a few months ago with a really great-looking thumbnail showing a sea of zombies with one zombie in riot gear in the center, and the title was "this zombie apocalypse game is ABSOLUTELY TERRIFYING." I thought, oh, I've not seen this before, I'm curious. It was Resident Evil 2, nearly five years after its release. No commentary, and it appeared to just be footage of the free demo.
I was genuinely furious. If I don't like a video, I just stop watching it, but I gave that one a dislike and blocked the channel.
So the question is, how do you feel about AI accountability in general, and in more specific use cases like deceptive thumbnails designed to get clicks on completely insubstantial content?
FYI, I understand it's about ad revenue: as long as a deceptive channel keeps getting clicks on that initial ad before viewers realize the video is trash, YouTube won't penalize them, so you don't have to go into that explanation.