What’s the problem with ChatGPT in the contact center?

By Ben Rigby

Exploring the ethical concerns of ChatGPT in the contact center.
I’ve been writing feverishly lately about the opportunity ChatGPT (and other generative AI systems) presents for the contact center. At Talkdesk, we believe that the generative pre-trained transformer (GPT) large language models (LLMs) powering systems like ChatGPT are going to be the backbone of the next generation of contact centers as a service (CCaaS).
We’ve launched our first feature that harnesses these capabilities to automatically summarize customer conversations and accurately select their dispositions (e.g., requests follow-up, wants to cancel service, etc.)—effectively eliminating most of a customer service agent’s after-call work. And we are actively integrating these LLMs into other core contact center use cases, as well as shaping our roadmap to accelerate the value our customers can gain from these natural language processing techniques.
But we also understand the magnitude of this new technology and its potential to do both harm and good.
A number of prominent tech leaders—including Elon Musk, Steve Wozniak, and Andrew Yang—have publicly called for a pause in the training of AI systems more powerful than GPT-4. Their general ethical concern is that runaway AI will outpace humanity’s ability to harness it for good.
With these big names weighing in on some of the downsides of AI systems, I thought it was a good moment to outline some of the main ethical concerns these new LLMs raise for the contact center. Any company considering using these AI models in its contact center should be aware of the potential pitfalls and put processes in place that help ensure safe, positive customer experiences.