What problem are you facing?

Since many people use the bot for venting about mental health issues, occasionally someone will send a message that is possibly suicidal.
What can we do to fix that problem?
When a user sends a message, we can search for any words related to suicide, and if any appear, we can respond with a message that lists suicide prevention resources.
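As a rough sketch of what that could look like (the keyword list, the resources text, and the function name below are illustrative placeholders, not anything in the bot yet):

```python
# Minimal sketch of the keyword approach. The keyword list and the
# resources message are placeholders, not a vetted set.
SUICIDE_KEYWORDS = {"suicide", "suicidal", "kill myself", "end my life"}

RESOURCES_MESSAGE = (
    "If you're struggling, please reach out: "
    "<list of suicide prevention resources here>"
)

def check_message(content: str):
    """Return the resources message if the content matches any keyword, else None."""
    lowered = content.lower()
    if any(keyword in lowered for keyword in SUICIDE_KEYWORDS):
        return RESOURCES_MESSAGE
    return None
```

Plain substring matching like this will miss paraphrases and flag innocuous uses of the words, which is part of why the NLP idea below might be worth exploring.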
Yousef-Bulbulia changed the title from "Display suicide prevention resourcs after detecting possibly suicidal messages" to "Display suicide prevention resources after detecting possibly suicidal messages" on Dec 5, 2021.
We could use NLP to accomplish this. There's a repository that gathers datasets from subreddits where suicidal comments are common and uses them to detect whether other comments are suicidal: https://github.com/hesamuel/goodbye_world.
Yeah. Since that code is written in Python, I think it would be easier to create a stand-alone REST API using a library like Flask: the bot sends it a request with the message payload, and the API returns how likely the message is to be suicidal.
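A minimal sketch of that API, assuming a `predict` function wraps the trained classifier (the endpoint name, payload shape, and `predict` itself are all hypothetical, not the repo's actual interface):

```python
# Hypothetical sketch: the /classify endpoint, payload shape, and the
# predict() wrapper around the classifier are assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(message: str) -> float:
    """Placeholder for the trained classifier; should return a probability in [0, 1]."""
    raise NotImplementedError

@app.route("/classify", methods=["POST"])
def classify():
    data = request.get_json()
    score = predict(data["message"])
    return jsonify({"suicidal_probability": score})

if __name__ == "__main__":
    app.run(port=5000)
```

The bot would then POST each incoming message to this endpoint and show the resources message whenever the returned probability crosses some threshold.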