Artificial intelligence

What’s the problem with ChatGPT in the contact center?


By Ben Rigby



Exploring the ethical concerns of ChatGPT in the contact center.

I’ve been writing feverishly lately about the opportunity ChatGPT (and other generative AI systems) presents for the contact center. At Talkdesk, we believe that the generative pre-trained transformer (GPT) large language models (LLMs) powering systems like ChatGPT are going to be the backbone of the next generation of contact centers as a service (CCaaS).

We’ve launched our first feature that harnesses these capabilities to automatically summarize customer conversations and accurately select their dispositions (e.g., requests follow-up, wants to cancel service, etc.)—effectively eliminating most of a customer service agent’s after-call work. And we are actively integrating these LLMs into other core contact center use cases, as well as shaping our roadmap to accelerate the value our customers can gain from these natural language processing techniques.
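To make that concrete, here is a minimal sketch of how a call summary and disposition can be requested from an LLM. It is an illustration under assumptions, not Talkdesk’s implementation: it assumes the OpenAI Python SDK, a placeholder model name, and a hypothetical disposition list and transcript.

```python
# Illustrative sketch only; not Talkdesk's implementation.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical, fixed disposition vocabulary the contact center already uses.
DISPOSITIONS = ["requests follow-up", "wants to cancel service", "billing question", "resolved"]

def summarize_and_dispose(transcript: str) -> dict:
    """Ask the model for a short summary and exactly one disposition from a fixed list."""
    prompt = (
        "Summarize this customer service call in two sentences, then choose exactly one "
        f"disposition from this list: {DISPOSITIONS}.\n"
        "Reply as JSON with keys 'summary' and 'disposition'.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    result = json.loads(response.choices[0].message.content)
    # Guard against the model inventing a disposition outside the allowed list.
    if result.get("disposition") not in DISPOSITIONS:
        result["disposition"] = "unknown"
    return result
```

The fixed list and the final guard keep the model’s output inside a vocabulary the contact center already understands, rather than letting it free-associate new labels.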

But we are also mindful of the power of this new technology and of its potential to cause harm just as much as to deliver benefits.

A number of prominent tech leaders—including Elon Musk, Steve Wozniak, and Andrew Yang—are publicly calling for a pause in the training of AI systems more powerful than GPT-4. Their general ethical concern is that runaway AI will overtake humanity’s ability to harness it for good.

With these big names weighing in on some negatives of AI systems, I thought it would be a good time to delineate some of the top ethical concerns of these new LLMs for the contact center. Any company considering using these AI models in their contact center should be aware of the potential pitfalls and consider processes that will help ensure safe and positive customer experiences.

Want to learn more?

Contact our team to find out how Talkdesk can enhance your call center software solutions.

BOOK A DEMO

Five ethical concerns with ChatGPT in the contact center.



1. Transparency.

The first ethical concern with using ChatGPT and other generative AI systems in the contact center is transparency, which applies both to being clear about how an AI-driven decision was made and to disclosing that it was made by AI in the first place. Customers have the right to know if their interaction was mediated by AI. If customers believe they are talking to a human and later discover that they were talking to a chatbot, for example, it could erode trust in the company and damage customer relationships with the brand. Full disclosure to the customer is crucial when designing chatbots and virtual agents.



2. Bias.

Another ethical concern is the potential for bias in the responses of chatbots and virtual agents. ChatGPT is a machine learning model that is trained on vast amounts of text data, and there is a risk that this data could be biased in some way. If, for example, an automatic quality review favors one pattern of speech over another, agents could be treated unequally simply because of the pattern of speech they grew up learning. In other words, this type of bias can lead to discrimination.
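A common mitigation is to audit automated scores before they feed into agent evaluations. The sketch below is purely illustrative, with hypothetical groups, scores, and a hypothetical threshold, but it shows the shape of such a check.

```python
# Illustrative fairness audit on hypothetical automated quality-review scores.
from statistics import mean

# Hypothetical scores keyed by a self-reported dialect / speech-pattern group.
scores_by_group = {
    "group_a": [0.91, 0.88, 0.93, 0.90],
    "group_b": [0.78, 0.74, 0.81, 0.79],
}

group_means = {group: mean(scores) for group, scores in scores_by_group.items()}
best, worst = max(group_means.values()), min(group_means.values())

# Flag the scoring model for human review if the gap exceeds an agreed policy threshold.
GAP_THRESHOLD = 0.05  # hypothetical policy value
if best - worst > GAP_THRESHOLD:
    print(f"Potential scoring bias: group means {group_means} differ by {best - worst:.2f}")
```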



3. Data security and privacy.

When using ChatGPT, user interactions are logged and stored, and personal conversations could be accessible to anyone if proper security measures are not in place. As with any software system handling personally identifiable data, data flowing through the LLM needs to remain encrypted end to end and follow all of the data privacy considerations that keep customers’ data safe.
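One concrete layer of that protection, illustrated below, is redacting obvious personally identifiable information before a transcript ever reaches an external LLM. The patterns here are a minimal sketch, not a complete PII solution, and they complement rather than replace encryption in transit and at rest.

```python
# Minimal, illustrative PII redaction before sending text to an external LLM.
# Real deployments need far broader coverage (names, addresses, account numbers, etc.).
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call me at +1 415-555-0123 or jane.doe@example.com"))
# -> "Call me at [PHONE] or [EMAIL]"
```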



4. Truthful results.

Another ethical concern is the possibility that ChatGPT will "hallucinate," inventing answers that may not be truthful yet delivering those alternative facts convincingly. Of course, we do not want to give our customers false information, especially if they use it to make important decisions in their lives.
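One family of mitigations is to ground the model in a trusted source and verify its answer against that source before it ever reaches a customer. The sketch below is illustrative only (hypothetical help-center article, placeholder model name, OpenAI Python SDK assumed): the model must cite a verbatim supporting quote, and the code falls back to a human agent if that quote does not actually appear in the source.

```python
# Illustrative grounding check: reject answers the source text does not support.
# Assumes the OpenAI Python SDK; the article and model name are placeholders.
import json
from openai import OpenAI

client = OpenAI()

def grounded_answer(question: str, article: str) -> str:
    """Answer only from the supplied article and verify the supporting quote."""
    prompt = (
        "Answer the question using ONLY the article below. "
        "Reply as JSON with keys 'answer' and 'quote', where 'quote' is a verbatim "
        "sentence from the article that supports the answer. "
        "If the article does not contain the answer, set both to null.\n\n"
        f"Article:\n{article}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    result = json.loads(response.choices[0].message.content)
    quote = result.get("quote")
    # Post-hoc check: only trust the answer if its supporting quote really is in the source.
    if not quote or quote not in article:
        return "I'm not sure. Let me connect you with a human agent."
    return result["answer"]
```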



5. Job displacement.

Finally, there is the concern of job displacement. As more companies turn to AI-based chatbots like ChatGPT, there is a risk that human contact center agents could lose their jobs. Companies must consider the impact that automation could have on their employees and take steps to mitigate any negative effects. In particular, contact center managers should imagine new opportunities to increase agent satisfaction by introducing tools that will automate menial tasks and help agents complete their work faster.


E-book

ChatGPT and the contact center of the future

Discover how this exciting new technology will change everything, from conversational AI to the role of the contact center agent.

A pause for GPT-4?

The pace of recent progress in large language models (LLMs) has been staggering, and there are legitimate concerns, as outlined above. Rapid development of AI without guardrails (at a minimum, consideration and mitigation of negative consequences) could cause those negative consequences to materialize on a far larger scale than we see today.

The Future of Life Institute (FLI)—the author of the open letter calling for a pause—wants to ensure that AI technology is developed in a way that benefits humanity and avoids potential risks and dangers. They are concerned that the major AI labs are in an “out-of-control” arms race, and currently, there is no proper oversight ensuring they are working in an ethical manner.

We will be watching these discussions closely. But our commitment to our customers, and to the millions of people who interact with our software systems every day, is to continually weigh the impact of our decisions against the ethical considerations outlined above.



Can ChatGPT be used ethically in the contact center?

In short, yes…with the appropriate precautions. For the LLM-powered features that we’ve already delivered to market and those we are developing, we at Talkdesk have already taken steps to minimize the potential ethical issues described above. We allow for transparent decision-making in the development process. We include modification by a human reviewer to remediate bias if it should exist. We maintain an absolute commitment to data privacy and security. We include anti-hallucination techniques in all our product designs to ensure no misinformation reaches our clients or their customers. And we give careful consideration to the impact of LLMs on human agents—with an eye toward imagining expanded roles for humans in the age of automation (I recently wrote about the concept of steering agents that oversee a team of bots).

While LLMs will improve the quality of service for the millions of us who turn to the companies we love for help and support, the onus is on software engineers, designers, and product managers to consider and address these ethical issues as they build.



Find out how ChatGPT will revolutionize customer service.

Join me and Brooke Lynch, CCW Digital’s Senior Analyst, to learn how ChatGPT and generative AI are going to fundamentally change our understanding of customer service and the role of the contact center. In this conversation, I will explain why ChatGPT is so revolutionary and the impact it’s having on natural language processing. I will also share my vision of the role of the customer service agent in the era of generative AI. Watch the full webinar to see how ChatGPT is transforming customer service.

Is this information helpful?

Find out how Talkdesk can improve the quality of your customer experiences.

GET A DEMO



Ben Rigby

SVP, Global Head of Product & Engineering, Growth at Talkdesk, a contact center as a service (CCaaS) software unicorn. He previously led AI at Directly, automating customer service with AI-powered virtual agents; was CEO of Sparked.com, using machine learning models to predict customer churn, retention, and lifetime value; was a software engineer at The Main Quad, acquired by Student Advantage; ran engineering for The North Face’s consumer website for five years; and was CTO of a SaaS startup whose customers included Sam Adams, Hyundai, Old Navy, IBM, the Sierra Club, and Scion.