The EU recently approved a comprehensive framework to regulate the development and use of AI. While it addresses copyright issues, its central approach is regulation that scales with the risk posed by each use of AI.
Most uses will likely fall into the low-risk category, such as spam filters or our very own JAutoMod. The use of AI in areas such as critical infrastructure, education, healthcare, law enforcement, border management, and elections will face far stricter regulation.
We’ve previously talked about AI content on Just About, but the internet is an ever-evolving landscape. AI is playing a greater role in our digital lives, and it’s important to discuss the opportunities and risks that it may present.
Do you think the focus on risk is the right approach, or is there another way to look at this? What specific uses of AI would you include, and where would they fit in this risk-based model?