We’ve had a couple of suspected cases of members using generative AI to produce bounty submissions, so now’s a great time to have a chat about our approach to AI-generated content on Just About.
One of the trends that inspired Just About was that of traditional media spreading itself thin, leaving domains of expertise behind - along with content quality and niche audiences - to contest only the biggest queries on Google.
If anything, that trend has worsened. We’ve seen once-mighty media brands attempt to cast the same wide net while laying off expert writers, and some have even published entirely AI-written content unedited and full of inaccuracies. Against the enshittification of the wider internet, we want to provide a fertile seedbed where authentic fans’ expertise can flourish. We believe there’s no substitute for the real thing.
Bounties are an important watering can for our seedbed. We want every piece of content we curate from bounty submissions to be an asset to its community. Each can - and our job is to ensure that each will - be the best resource of its kind on the internet, because it draws on the native expertise of the community.
This gives us a handy razor for the thorny AI question: does it help or hinder these goals?
We’re not imposing a blanket ban on any and all AI-generated or -aided content. But if a member simply copy-pastes from ChatGPT into a bounty thread, it probably won’t meet the standards of quality that lead members to vote for it, or that we want to hit with our curated content. We will likely reject such submissions just as we would any others that don’t benefit their communities.
There’s also the risk of copyright infringement. Current generative AIs are trained on existing content, so their words are coming from somewhere. If they repeat those words in a way that breaches copyright, we can’t accept that on the platform, and may have to apply the measures in our copyright policy to any member who posts it. This will make us and you sad, so please read that policy and take whatever measures are necessary to avoid its wrath.
For our part, there’s plenty we can do to deter the antisocial use of AI. We’ll try to set bounties that are best answered by expert humans rather than AI. We’re also having a parallel conversation about our moderation and detection practices: a manual flagging system, possibly with rewards for those who report inauthentic or inappropriate content, is under discussion, and low-quality AI content could fall within its scope.
All of this said, we’re open to the ways in which AI could enhance Just About. We don’t see an issue with members using AI as a starting point for their bounty submissions: perhaps it helps your process to see what a generic answer looks like, or how it’s structured, or to ask ChatGPT for ideas. These tools can do a lot, and not all of it is harmful.
Where AI can make cool stuff or detection problems prove intractable, we might embrace it. We’re sensitive to ethical concerns about art AIs like Midjourney, which are trained on the work of human artists, and again our copyright policy will apply. But perhaps AI tools could form part of a process alongside human input: give them an asset and ask for an interesting manipulation, or use them to generate in-game challenges, with a bounty task to try them out. In short, is there a way to use these tools to have fun or make something useful while remaining true to our values?
That’s not a rhetorical question - what do you think? As in all things, we want to build Just About in concert with you. This post sets out our goals and principles, but rules and policies can come after we’ve heard your feedback, so don’t be shy!
We’ll see you in the comments.