YouTube may soon stop the spread of videos on sensitive or abusive topics by blocking the Share option outright whenever real-time AI analysis raises suspicions about the "quality" of the material.
YouTube will not outright remove videos labeled as "borderline," meaning content that does not explicitly violate Google's posting rules but may still be considered offensive by some audiences. Instead, YouTube administrators will work to ensure that such posts are seen by as few people as possible: excluding them from viewing suggestions in the recommendations feed and completely blocking the Share option. As a result, the flagged content can only be discovered by visiting the associated YouTube channel directly, or via direct links, possibly distributed through other messaging platforms.
Navigating a genuine minefield, right on the border between reasonable moderation and censorship of free expression, YouTube could in a first phase suggest that the user reconsider the posted video. If that approach fails, with the user choosing to keep the clip on their YouTube page in its original form, its visibility will be severely limited, with further penalties possible after subsequent review.
Importantly, flagged videos will not necessarily be removed once posted; moderators will decide later what action to take.
Tasked with continuously identifying potentially offensive posts, and learning from content that users have repeatedly reported, the AI system should steadily improve its effectiveness. Depending on the results, YouTube is likely to rely less and less on teams of human moderators.