Sturmer's avatar

Personally, I see a significant difference between video/image content and text content, so my vote here is yes.

Most people believe what they see, so it's crucial to highlight whether a beautiful waterfall is a real location or generated by AI. When someone showcases incredible skills or performances, AI-generated content could mislead viewers into attempting something potentially dangerous.

With text, it’s a bit easier. We’re accustomed to reading fictional works. I appreciate the current search engine policy—it doesn’t matter who or what created the text, as long as it’s high quality and serves its purpose - it ranks. For decades, people have churned out low-quality content, and it’s often painful to read poor reviews or explanations. Last year felt like a breath of fresh air as the quality of posts improved to an almost pleasant level. I noticed a similar improvement when autocorrect became widespread, and later, services like Grammarly significantly raised the standard of human-generated text.

Dydo's avatar

Wow, I was just writing about this very matter in another topic.

Well, I think we are in an "AI moment" that is dangerous due to the lack of regulation and understanding around it. People often overlook whether content is AI-generated because it evokes emotions and thoughts regardless. Not to mention copyright issues.

But I guess AI can be an interesting tool for content creation, offering possibilities like a jazz version of Travis Scott. It's something bands or artists might (and here we have another problem) never pursue on their own, and that would "deprive" us of that experience. We should address which types of content AI can work with, and under what conditions, so we can establish ethical and creative boundaries (like only public domain material, or a new Creative Commons licensing type).

That said, AI detection is the first thing we need to evolve: either "active detection", labeling AI-generated content as such when it's detected (probably by another AI or algorithm, lol), or "passive detection", like YouTube is trying to do even before a video is published. Once we can detect it (at least 90% of it), we should strongly advise people of its problems and benefits directly on the content. But I don't think banning it is the solution. Think about the Prohibition era: there will always be someone doing it, and it's better to know who that person is, their motives and intentions, and get them to do it the right way, than to leave them marginalized, harming people however they want (even if it's a crime and they get caught, someone else will step in).

A filter, as you said, is a great idea too! It would open up a whole new world of content consumption. I doubt anyone would consume only AI content, and it's important to think of the "AI artists" that could come to be (only if the problem of AI using and not crediting artists were somehow addressed, which is not the case right now), using AI as a form of expression just like a pencil or an instrument. But I'm just dreaming.

Braulio M Lara 🔹's avatar

I'm relatively new to the AI world, and so far I don't know how to use it.

Because my mother language is Spanish, and though I speak English too, sometimes I have to grab some help from Google Translate 😁

But maybe in the future I'll learn to use artificial intelligence, which I think could be a good gear ⚙️ for making everything better.

I think that sooner or later using AI will be as normal as using a calculator to solve mathematical problems.

henhid's avatar

Generative AI has made it easier than ever to create content, but it's time to use it thoughtfully. Rather than blocking AI-generated content, I think labeling is the best way to give users a choice about whether or not to engage with it. AI should be a support, not an end. If the person cannot produce clear, coherent ideas themselves, then AI will only provide shallow, highly recognizable content without any authenticity.

In the broader view, generative AI has an immense effect on platforms where authenticity is required. As much as it might speed up content creation, it could also diminish perceived value if audiences feel there is no human creativity involved.

All in all, clear labeling of AI-generated content can empower users to make informed choices while preserving the trust between them and the platforms they are using.

mastercesspit's avatar

Synthetics should always be labeled, and real talent promoted.

Paul's avatar

I'd be happy with AI content labelled, but would prefer it to be banned.

I'm completely against AI in the arts, be it image, video, music, etc. Some things should just be left to human inspiration.

yan57436's avatar

I believe that AI is part of the future; we can't abhor it, but should make it a tool. Obviously a bounty that asks for an idea or an experience can't and shouldn't use AI. But take YouTube videos, for example: people who don't have a good sense of video, or even a strong command of spoken English, can use AI as a proper communication tool.

Horror and Cats's avatar

I really enjoy making AI assets for thumbnails. That's really the extent to which I use it, just because I'm a terrible artist unless I have an exact model to reference. I don't think it should be disallowed, but it should be labeled.

I also think intentionally misleading AI should be punished in some way. Demonetization of the content, the entire channel, bans, etc.

If it's clear you are trying to trick people, whether for something as small as clicks or as big as politics, and you DON'T make it clear it's satire, comedy, fiction, etc., you should receive a disincentive that hurts.

Hunter's avatar

As a human, I'd rather have the choice to see or hide AI content. How to determine whether content is AI or not is the question to ask. I think we are living in a hard moment when AI is getting better every day, and human-only content is going to have to battle 10X the content produced in a few minutes, with more agent-based AI tools coming out every day with new powers.

I don't know, maybe I'm seeing the doom of the human digital space where there is none. Maybe I'm wrong and this new tech will help us in the end...

Dave's avatar

I have no problems with AI tools and use them myself. There's a balance to be had with labelling, between when tools are used to assist in getting your message/idea across vs. when the entire thing is none of your own thoughts.

The way tools are going, if things have to be labelled for the slightest AI involvement, it will just become blanket small-print disclaimers that everyone ignores, saying AI may have been used in the creation process.

Where I really don't like it being used is on Reddit, here, and other "discussion" platforms, where you want to have a discussion with a person and their thoughts/opinions, and someone pastes replies from ChatGPT etc. (rather than just using it for editing). If I wanted to know what it thought, I would have asked it myself. This sort of use should just be deleted, let alone labelled.

Amoni P's avatar

I don't think generative AI has any place where creative work is being displayed. I'm staunchly anti-GenAI for the following reasons:

  • It cannot produce wholly unique works of art

  • It does not produce art that is good

  • It requires the theft of others' work to function with any semblance of verisimilitude

  • It does not demonstrate the creative power of the user who wrote the prompts

There may be a place for AI, but not GenAI, and it certainly isn't here.

JHenckes's avatar

I think the ideal would be to split bounties into with-AI and without-AI. Artificial intelligence will dominate everything whether we try to avoid it or not; the important thing is to know how to set limits. I don't think it's urgent for the site, because the anti-AI policy works very well and allows the Just About community to be incredible! But in the future it will become obsolete, so it's important to define how AI will be used.

That AI will impact all areas is already a reality; it's been happening little by little in medicine, computing, and law. Everyone will be affected, and already has been.

CMDR Henckes's avatar

In my opinion, the current AI rules here are already ideal. For bounties where you have to create designs and drawings, generative AI content is already banned, as it should be.

AI-generated text is easy to detect just by reading, and even easier if you use a detector. It's almost impossible to tell if someone has rewritten it, but that at least takes some effort, so it isn't a big problem. Still, it's a little sad when you put in a lot of effort and didn't win a bounty, and someone who used AI and rewrote it did!

But I've noticed that the moderators here already keep an eye on this kind of situation; sometimes the AI creates wrong information, and that reduces the chances of people using AI winning the bounties.
