Wow, I was just writing about this very matter on another topic.
Well, I think we're in an "AI moment" that is dangerous due to the lack of regulation and understanding around it. People often overlook whether content is AI-generated because it evokes emotions and thoughts regardless. Not to mention the copyright issues.
But I guess AI can be an interesting tool for content creation, offering possibilities like a jazz version of Travis Scott. It's something bands or artists might (and here we have another problem) never pursue on their own, and that would "deprive" us of that experience. We should address what types of content AI can work with and under what conditions, which could form ethical and creative boundaries (like restricting it to the public domain, or a new type of Creative Commons license).
That said, AI detection is the first thing we need to evolve: either "active detection", labeling AI-generated content as such when it's detected (probably by another AI or algorithm lol), or "passive detection", like YouTube is trying to do even before a video is published. Once we can detect it (at least 90% of it), we should strongly advise people of its problems and benefits directly on the content. But I don't think banning it is the solution. Think about the Prohibition era: there will always be someone doing it, and it's better to know who that person is, along with his motives and intentions, and make him do it the right way, than to leave him marginalized, harming people however he wants (even if it's a crime and he gets caught, someone else will step in).
A filter, as you said, is a great idea too! It would open up a whole new world of content consumption. I doubt anyone would consume only AI content, and it's important to think of the "AI artists" that could come to be (but only if the problem of AI using an artist's work without credit were somehow addressed, which is not the case right now), using AI as a form of expression just like a pencil or an instrument. But I'm just dreaming.