Latest update: April 10, 2024
This article was first published on August 30, 2023, about three months after the Just About Alpha first went live, which speaks to how quickly we received our first few suspected cases of members using generative AI to produce bounty submissions. It's now a living document outlining our approach to AI-generated or -assisted content on the platform.
Our goals
One of the trends that inspired Just About was traditional media spreading itself thin, leaving domains of expertise behind - along with content quality and niche audiences - to contest only the biggest queries on Google.
If anything, that trend has worsened. We’ve seen once-mighty media brands attempt to cast the same wide net while laying off expert writers, and some have even published entirely AI-written content unedited and full of inaccuracies. Against the enshittification of the wider internet, we want to provide a fertile seedbed where authentic fans’ expertise can flourish. We believe there’s no substitute for the real thing.
Bounties are an important watering can for our seedbed. We want the content we curate from bounty submissions to be an asset to the communities it serves. These resources can - and our job is to ensure that they will - be the best of their kind on the internet, because they draw on the native expertise of the community.
This gives us a handy razor for the thorny AI question: does it help or hinder these goals?
The risks
We’re not imposing a blanket ban on any and all AI-generated or -aided content. But if a member simply copy-pastes from ChatGPT into a bounty thread, or indeed a community discussion, we're concerned about the following:
- It adds no unique value. Anyone can do this. It requires no experience specific to the writer. It contains no authenticity, no knowledge, no passion, no humanity.
- Even where such submissions contain useful content - as they may - pasting them verbatim into our platform may indicate a low-effort attempt to snatch bounty rewards rather than to add real value to the community. That's not fair on those who are making an effort, and those are the people we want to encourage.
- For many bounty prompts, such submissions probably won't meet the standards of quality that lead members to vote for them, or that we want to hit with our curated content.
The specifics may vary by case, but instances of the above - whether individual or repeated - may be considered antisocial behaviour as outlined in our Community Code of Conduct.
There's also the risk of copyright infringement. Current generative AIs are trained on existing content, so their words are coming from somewhere. If they repeat those words in a way that breaches copyright, we can't accept that on the platform, and may have to apply the measures in our copyright policy to any member who posts it. This will make us and you sad, so please read **that policy** and take whatever measures are necessary to avoid its wrath.
In short: low-effort use of generative AI anywhere on the platform is likely to be judged antisocial. Not only will we moderate such content, just as we would any that doesn't benefit our community, but we will also apply appropriate sanctions up to and including permanent bans for repeat offenders.
NB: There may be rare exceptions to this approach, such as bounties that specifically call for AI submissions (see below), but this will be clearly expressed in the bounty copy.
Mitigation
For our part, there's plenty we can do to deter the antisocial use of AI. We'll try to set bounties that can best be answered by expert humans rather than AI. We're also having a parallel conversation about our moderation and detection practices: a manual flagging system, possibly with rewards for folks who report inauthentic or inappropriate content, is under discussion, and low-quality AI content would fall within its scope.
All of this said, we’re open to the ways in which AI could enhance Just About. We don’t see an issue with members using AI as a starting point for their bounty submissions: perhaps it helps your process to see what a generic answer looks like, or how it’s structured, or to ask ChatGPT for ideas. These tools can do a lot, and not all of it is harmful.
Where AI can make cool stuff, or where detection problems prove intractable, we might embrace it. We're sensitive to ethical concerns about art AIs like Midjourney, which are trained on the work of human artists, and again our copyright policy will apply. But perhaps AI tools could form part of a process alongside human input: give them an asset and ask for an interesting manipulation, or use them to generate in-game challenges with a bounty task to try them out. In short, is there a way to use these tools to have fun or make something useful while remaining true to our values?
That’s not a rhetorical question - what do you think? As in all things, we want to build Just About in concert with you. This post sets out our goals and principles, but rules and policies can come after we’ve heard your feedback, so don’t be shy!
We’ll see you in the comments.