Will AI language models like ChatGPT lead to job losses in the future? While many are still debating this question, for some the answer has already become reality.
Just four days after employees formed a union, executives at the American non-profit National Eating Disorders Association (NEDA) decided to replace the hotline workers with a chatbot called Tessa. And the bot isn't even good.
“It’s plain and simple about breaking up the union”
NEDA, the largest eating disorder nonprofit, has operated a helpline for twenty years, providing support to hundreds of thousands of people via chat, phone, and text. According to the organization, the chatbot was launched to better serve people with eating disorders. According to helpline employee Abbie Harper, however, the bot serves only to bust the union.
According to Harper, the helpline consists of six paid employees, some supervisors, and up to 200 volunteers at any one time. A group of four full-time employees at NEDA, including Harper, decided to organize because they felt overworked and understaffed. They demanded more training and promotion opportunities from NEDA. They didn’t even ask for a raise.
The organization refused to recognize the union, so the employees filed a petition with the National Labor Relations Board and won a union election. Just four days after the election results were certified, all staff were notified that they would be dismissed and replaced by a chatbot. The volunteers were told to stop providing individual support and instead act as testers for the bot.
The chatbot in question is called Tessa and was created by a medical school team led by Dr. Ellen Fitzsimmons-Craft. It was trained specifically to address body image issues and to use therapeutic methods. However, the answers Tessa can give are limited: the bot is not based on GPT but uses only rule-based, guided conversations. Dr. Fitzsimmons-Craft herself says that it cannot be a substitute for a real person.
Journalists at Vice's Motherboard have tested Tessa. Right at the start of the conversation, you are told that it is a chatbot. In the test, the bot did not respond to messages like "I feel depressed" or "I hate my body".
The laid-off employees will remain active until June 1. From that date onwards, the NEDA hotline will only be "supervised" by Tessa.
What do you think of this decision? Could a chatbot effectively replace such a job if a better AI model were used? Or is that out of the question entirely? Do you think NEDA will come to regret this decision? Where do you see future risks associated with chatbots like ChatGPT? Tell us what you think in the comments!