Submissions (20)

Asim, 7/28/2024

$2

I think one thing we could do to fix social media is having two separate versions of certain apps. Back in the day, when the first iPhone came out, you could pay 99p for an app, or you could download the free lite version with very limited features.

I think the premium version of an app should be free, and the "charge" is that you need to use your real first name (no surname) and a real photo of yourself, verified against some ID. I think people forget that behind random no-profile usernames, or even faces on a screen, there are real people with real feelings and emotions, and there is no room for hate.

The lite version is for people who refuse to sign up with their real info; they are limited to 1 hour of scrolling a day and, let's say, 5 tweets. Any abuse results in an IP ban that lasts 24 hours. Hopefully that keeps the trolls on the lite version, and the people who really want to engage and have a positive experience on the premium version.
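A minimal sketch of how the lite-tier limits above might work. All the names and numbers here are illustrative, taken straight from the proposal (1 hour of scrolling, 5 posts, 24-hour IP bans), not from any real platform:

```python
import time

# Illustrative lite-tier limits from the proposal above
SCROLL_SECONDS_PER_DAY = 60 * 60   # 1 hour of scrolling per day
POSTS_PER_DAY = 5                  # e.g. 5 tweets per day
BAN_SECONDS = 24 * 60 * 60         # 24-hour IP ban for abuse

class LiteTier:
    """Tracks per-IP daily quotas and temporary bans (in-memory sketch)."""

    def __init__(self):
        self.scroll_used = {}    # ip -> seconds scrolled today
        self.posts_used = {}     # ip -> posts made today
        self.banned_until = {}   # ip -> unix timestamp when ban expires

    def _is_banned(self, ip, now):
        return self.banned_until.get(ip, 0) > now

    def ban(self, ip, now=None):
        now = time.time() if now is None else now
        self.banned_until[ip] = now + BAN_SECONDS

    def can_scroll(self, ip, seconds, now=None):
        now = time.time() if now is None else now
        if self._is_banned(ip, now):
            return False
        used = self.scroll_used.get(ip, 0)
        if used + seconds > SCROLL_SECONDS_PER_DAY:
            return False
        self.scroll_used[ip] = used + seconds
        return True

    def can_post(self, ip, now=None):
        now = time.time() if now is None else now
        if self._is_banned(ip, now):
            return False
        if self.posts_used.get(ip, 0) >= POSTS_PER_DAY:
            return False
        self.posts_used[ip] = self.posts_used.get(ip, 0) + 1
        return True
```

In practice the daily counters would reset at midnight and the state would live in a shared store rather than memory, but the quota-then-ban logic is the core of the idea.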

I'd have more pop-ups that encourage you to block people, and maybe hourly pop-ups that encourage you to check in with your wellbeing, or get you to do a 2-minute breathing exercise so you can skip ads for 30 minutes on all videos.

Everyone hates waiting for things, so make the trolls wait longer for their abuse and encourage people to do good things for their mental health to help skip ads!

Limal, 7/28/2024

$2

This might address some issues mentioned above:

Implementing a peer review and evaluation system on social media platforms can help mitigate the spread of misinformation and ensure the credibility of shared content. Similar to the scientific community, where research is reviewed by experts before publication, social media content could be subjected to a review process by knowledgeable users.

How It Works:

1) When a user wants to post content, especially claims or information that could impact public opinion, it would be submitted for review.

2) A diverse group of users with expertise or interest in the relevant topic would evaluate the content for accuracy, context, and reliability. This group could be selected based on their past contributions and credibility.

3) Reviewers would provide feedback and rate the content on a scale of credibility. Constructive criticism and suggestions for improvement would be encouraged.

4) Based on the peer review, the content would either be approved for posting, revised according to feedback, or flagged for further scrutiny if found misleading.
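The four-step flow above could be sketched roughly like this. The thresholds and function names are hypothetical, chosen just to show the approve/revise/flag branching:

```python
# Hypothetical sketch of the peer-review flow described above.
APPROVE_THRESHOLD = 3.5   # assumed cutoff on a 1-5 credibility scale
FLAG_THRESHOLD = 2.0      # below this, flag for further scrutiny

def review_content(post, reviewers):
    """Each reviewer is a callable returning (score_1_to_5, feedback).

    Returns a (status, average_score, feedback_list) triple, where
    status is "approved", "revise", or "flagged".
    """
    scores, feedback = [], []
    for reviewer in reviewers:
        score, note = reviewer(post)
        scores.append(score)
        feedback.append(note)
    avg = sum(scores) / len(scores)
    if avg >= APPROVE_THRESHOLD:
        return ("approved", avg, feedback)
    if avg < FLAG_THRESHOLD:
        return ("flagged", avg, feedback)
    return ("revise", avg, feedback)
```

Selecting the reviewer pool by past contributions and credibility (step 2) would sit upstream of this function.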

Benefits: enhanced credibility, encourages critical thinking, reduces misinformation, promotes constructive dialogue.

We've seen something quite similar on Wikipedia, too. While you can freely post pics of your dinner, any mission-critical information will be verified by experts. And this is quite close to what we have in JA!

Toretto 70, 7/28/2024

$2

This is very challenging, especially since our younger generation seems less polite to older people. I have 3 suggestions to make it better:

  1. Establishing Community Standards: create transparent rules that define what constitutes hate speech, misinformation, and harassment.

  2. Investing in AI and Human Moderators: use advanced algorithms to detect harmful content while employing human moderators to review complex cases, ensuring nuanced understanding.

  3. Encouraging User Reporting: simplify the process for users to report inappropriate content and ensure timely responses from moderator teams.

Josh B, 7/28/2024

$2

Social media is so vast I can see how it's such a difficult medium to manage, but it can be a highly toxic and damaging means to bully or just generally inflict cruelty on others.

  1. I don't believe those under high-school age should be able to access these platforms, and some sort of parental agreement should be part of the registration for anyone under 16, to accept some level of responsibility for their children's actions/words until they are adults themselves.

  2. I agree with a lot of the posts on this bounty that AI should play a part. Algorithms should be set up to flag posts with buzz words/phrases to help highlight posts for human review. There would still be a human burden for proper, detailed checks of posts, but the AI element would flag a proportion of content that could be deemed inappropriate/offensive.

  3. I think school education certainly comes into play too. You'll never be able to stop bullying entirely, but schools can educate on the impact these things have on people and the outcomes of such traumatic cases. These shock factors can only help.
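The flagging idea in point 2 can be sketched very simply: a keyword pass that routes suspicious posts into a human-review queue. The phrase list and the helper name here are made up for illustration; a real system would use a trained classifier rather than literal string matching:

```python
# Illustrative sketch of point 2: flag posts containing buzz words/phrases
# so a human moderator reviews them. The phrase list is a stand-in.
BUZZ_PHRASES = {"nobody likes you", "you're worthless", "go away forever"}

def flag_for_review(posts):
    """Return the subset of posts a human moderator should check."""
    queue = []
    for post in posts:
        text = post.lower()
        if any(phrase in text for phrase in BUZZ_PHRASES):
            queue.append(post)
    return queue
```

The point of the design is triage: AI narrows the stream, humans make the final call on the flagged proportion.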

Sturmer, 7/28/2024

$2

Remove the Share Button

One of the most destructive features of social media is the share button. It allows manipulative posts to spread rapidly, achieving specific emotional reactions without proper context. People often share content that aligns with their views without fact-checking, which leads to the widespread dissemination of misinformation. By removing the share button, the spread of false information would be significantly limited, exposing only a small number of people to potentially misleading content.

Social media has become a battleground for politics. In the 1940s, aircraft dropped propaganda leaflets; now, the same tactics are used in the form of tweets and posts. Removing the share button would mean that misinformation and propaganda wouldn't reach millions instantaneously. Instead, any false information would be more likely to be spotted and discredited before gaining traction, preserving the trust and integrity of the platform.