I think Sturmer covered a lot of the main points pretty well and I'm not sure how much I can improve on it, but I'll try.
I know you've heard it all before, yet a lot of people seem to forget the old saying and draw conclusions based on what they see at face value without looking any deeper. If a public figure or celebrity is messaging you and offering you something: STOP and THINK. Why would they message you out of the blue? Why would they endorse some random cryptocurrency? Why would they ask you for money?
Verify the account is authentic: check the account history, their tweets, posts, etc. Is there a sudden change in what they usually post? Or is it a fairly new account with stolen images? If it's the latter, congratulations, you avoided being scammed using common sense. If the account seems genuine but the behavior is strange, it's possible the account was compromised. As Sturmer said, QUESTION EVERYTHING.
As someone with experience working with generative AI, I find it relatively simple to spot fakes once you know what to look for and examine images a little closer. Take AI-generated people, for example: expressions are one thing AI still struggles with, so much of the time expressions will be fairly neutral and void of life, as if the subject were posing for a driver's license or mugshot.
You can also look for imperfections in the image itself: the lighting might be wrong, shadows might fall in the wrong places, and textures may be mismatched or misaligned. Other telltale signs are imperfections in the person themselves: eyes that are the wrong color or two different colors, missing or extra fingers, disproportionate eye shapes, etc. AI often struggles with intricate details and textures, ESPECIALLY text and logos.
Compare recent photos of the person in question and look for things that are out of place, like I suggested above.
Another growing problem with deepfakes is how realistic AI-generated voices are getting; the untrained ear might not be able to tell the difference. But as with images, there are things to watch out for.
One of the biggest giveaways with many AI voices is that they can be very monotone. Often there will be very little change in the sound and pitch of the person's voice. Listen carefully to how they start and end sentences, and to how the tone only fluctuates within a narrow range and rarely strays from it. Listen to the pronunciation of words: quite often AI will get a word wrong or struggle to give it the right tone, and the volume stays exactly the same throughout, so it just sounds weird.
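That "narrow pitch range" idea can even be turned into a rough numeric check. The sketch below is a hypothetical toy, not a real detector: it fakes two "voices" with pure sine tones (one whose pitch moves around, one that barely moves), estimates pitch per frame by counting zero crossings, and compares the spread. A real monotone AI voice would show the same pattern: low variation in its pitch track.

```python
# Toy monotony heuristic: a voice whose pitch barely varies will have a
# pitch track with very low spread. Sine tones stand in for real speech.
import math
import statistics

SAMPLE_RATE = 16000

def tone(freqs, seconds_per_freq=0.25):
    """Concatenate sine segments, one per frequency in `freqs`."""
    samples = []
    for f in freqs:
        n = int(SAMPLE_RATE * seconds_per_freq)
        samples.extend(math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

def pitch_track(samples, frame=2048):
    """Crude per-frame pitch estimate from positive-going zero crossings."""
    pitches = []
    for start in range(0, len(samples) - frame, frame):
        chunk = samples[start:start + frame]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a < 0 <= b)
        pitches.append(crossings * SAMPLE_RATE / frame)
    return pitches

natural = tone([110, 180, 95, 220, 140])    # pitch moves around
monotone = tone([120, 122, 119, 121, 120])  # pitch barely moves

print("natural spread:", statistics.stdev(pitch_track(natural)))
print("monotone spread:", statistics.stdev(pitch_track(monotone)))
```

The "natural" track's spread comes out far larger than the "monotone" one. Real audio is obviously messier, but the same intuition is what forensic tools build on.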
Whilst I suppose it could be possible to put some sort of digital marker on content, similar to how blockchain items verify authenticity, it seems like a lot of trouble and expense when, in most cases, common sense and a little knowledge can combat most deepfakes.
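For what it's worth, the "digital marker" idea doesn't need a blockchain at all; at its core it's just a cryptographic signature over the content. Here's a minimal hypothetical sketch using Python's standard library, where an HMAC with a shared key stands in for a real public-key signature (which a real scheme would use so anyone could verify without the secret):

```python
# Hypothetical content marker: publisher computes a keyed MAC over the
# file's hash; any change to the content invalidates the marker.
# SECRET_KEY is a placeholder; a real system would use public-key signatures.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real private key

def sign(content: bytes) -> str:
    """Attach a marker: HMAC-SHA256 over the content's SHA-256 digest."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, marker: str) -> bool:
    """Recompute the marker and compare in constant time."""
    return hmac.compare_digest(sign(content), marker)

original = b"official statement video bytes..."
marker = sign(original)

print(verify(original, marker))                 # untouched content passes
print(verify(original + b" tampered", marker))  # altered content fails
```

The catch, as I said, is the infrastructure: everyone would need to publish and check keys, which is exactly the "trouble and expense" part.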