The detection model used by LinkedIn to safeguard users against unsolicited messages takes into account user behaviours and interactions.
As a professional social network, LinkedIn offers many opportunities for purposeful use, but by its very nature the platform can also become a medium for unwanted, fraudulent, pestering, or sexually suggestive messages.
Since private messages are seen as stepping into a member's personal space, LinkedIn says its findings show that most users simply block the offending account, or avoid reporting it out of fear of repercussions.
To tackle the under-reported nature of the issue, the platform has, alongside publishing Community Policies and reviewing reports, also deployed machine learning models to detect potential harassment within private messages.
LinkedIn has grouped sexually harassing messages into three categories:
Romantic Scams
Financial scams initiated through romantic or enticing messages, typically accompanied by suspicious account signals.
Inappropriate Advances
LinkedIn is a professional networking platform, but some users confuse it with a dating website and try to solicit fellow members for romantic purposes.
Targeted Harassment
Targeted harassment, stalking, or trolling intended to upset a user or group of users may come from fake or real accounts, says LinkedIn.
While the former two categories are already addressed on the platform, LinkedIn plans to address this one in the future.
The machine learning harassment detection system is based on three models, built by studying reported cases, to address Inappropriate Advances and identify users sending harassing messages.
Sender Behaviour
This model scores sender behaviour using signals such as platform usage and invitations sent, and is trained by studying users who have been reported and confirmed to have engaged in harassment.
Message Model
This model scores the content and context of messages, and is trained by scrutinising messages that were reported and confirmed as harassment.
Interaction Model
This model scores how two members interact with one another, and is trained on interaction signals from cases that resulted in harassment.
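LinkedIn has not published the exact features behind these models, but a minimal sketch of how the three groups of signals might be organised could look like the following (all field names are illustrative assumptions, not LinkedIn's actual signals):

```python
from dataclasses import dataclass

# Hypothetical feature groupings for the three models. LinkedIn only mentions
# broad signal types (platform usage, invitations sent, message content and
# context, member interactions); the specific fields below are assumptions.

@dataclass
class SenderFeatures:
    """Behavioural signals about the account sending the message."""
    messages_sent_per_day: float
    invitations_sent_per_day: float
    invitation_accept_rate: float
    account_age_days: int

@dataclass
class MessageFeatures:
    """Content and context signals about the message itself."""
    text: str
    is_first_message_in_thread: bool
    contains_off_platform_contact_request: bool

@dataclass
class InteractionFeatures:
    """Signals describing how the two members have interacted so far."""
    are_connected: bool
    shared_connections: int
    recipient_has_replied: bool
```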
These models work in sequence to detect and limit unsolicited messages; scrutiny only proceeds to the next model if the previous model flags the message as suspicious.
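A minimal sketch of that sequential arrangement, assuming each model outputs a suspicion score between 0 and 1 and that later models only run when earlier ones flag the message (the model functions and thresholds below are placeholders, not LinkedIn's implementation):

```python
from typing import Callable, Dict, List

# Each stage maps a candidate message (with sender/interaction context)
# to a suspicion score in [0, 1]. The concrete models are placeholders.
Stage = Callable[[Dict], float]

def run_cascade(stages: List[Stage], thresholds: List[float], example: Dict) -> bool:
    """Flag a message only if every stage, in order, exceeds its threshold.

    If any stage scores below its threshold, the remaining stages are
    skipped, mirroring the sequential scrutiny described above.
    """
    for score_fn, threshold in zip(stages, thresholds):
        if score_fn(example) < threshold:
            return False
    return True

# Stand-ins for the three trained models; the fixed scores are made up.
def sender_model(example: Dict) -> float:
    return 0.8   # sender-behaviour suspicion

def message_model(example: Dict) -> float:
    return 0.7   # message content/context suspicion

def interaction_model(example: Dict) -> float:
    return 0.9   # member-interaction suspicion

example = {"sender_id": 1, "recipient_id": 2, "text": "..."}
if run_cascade([sender_model, message_model, interaction_model],
               [0.5, 0.5, 0.5], example):
    print("flagged: hide the message and offer un-hide/report options")
```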
If a message is detected as harassing, the system triggers a newly launched feature that hides the message, with an option for the recipient to un-hide and report it.
LinkedIn is looking at enhancing the harassment detection system with new modelling techniques, training data selection, and feature engineering.
New product experiences to reduce harassment are also being explored, and in the coming months, members who report harassment will be informed about what the platform does with their reports.