71% of senior IT leaders have concerns that generative AI will introduce security risks to their data. Even so, more than half of those surveyed are experimenting with – or already implementing – generative AI within their companies, and 84% believe it will help them better serve their customers.
Companies need to know where their data is. With a database, you can lock access down to the cell level to keep data safe and define how it gets used. But that’s not how generative AI works. Generative AI doesn’t “store” data in retrievable records; it “learns” from data, absorbing it into the model itself. That’s a whole different situation.
Mitigating risk must extend beyond data security when using generative AI, whether you’re working with your own data and systems or with third-party AI solutions and inputs. Explainability and toxicity also factor into this trust equation. Explainability is the ability to understand how generative AI sourced its output and to validate it. Toxicity is the potential for generative AI to create harmful, stereotyped, offensive, or misleading outputs.
Both explainability and toxicity affect your ability to trust the outputs of generative AI. If you can’t verify the outputs, or if they’re misleading, the results can damage your brand, your customer relationships, and your business. Yet, even though 39% of respondents to a McKinsey survey considered explainability a relevant risk, only 18% were working to mitigate it.
At Aria, we see huge opportunities to use generative AI to improve the end-to-end customer experience. The potential grows when you draw on proprietary data from your CRM and billing platform. While 74% of executives believe the benefits of generative AI outweigh the associated risks, we also know how important it is to help companies get started safely. Your company will move forward more easily if you choose a generative AI solution with a well-orchestrated, built-in security and trust layer. Your workforce will be more productive, using only relevant and trusted outputs in their work.
Read on to learn the key factors a security and trust layer addresses to mitigate risk, and understand why humans make great “co-pilots” when using generative AI to transform the end-to-end customer experience.
Increase productivity without leaking data
The fastest and best course of action is to work with partners whose platforms are designed to keep you in control of your data. That means making sure they have a trust layer between your data and the model that learns from or processes it. A trust layer lets you choose the right model for the right task – giving you flexibility and a wider range of use cases.
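To make that idea concrete, here’s a minimal sketch of task-based routing. The task names and model labels are hypothetical placeholders; in practice a trust layer makes this decision behind its own API rather than in your application code.

```python
# A minimal sketch of task-based model routing inside a trust layer.
# The mapping below is hypothetical; the point is that the layer, not
# the caller, decides which model handles which kind of request.

MODEL_ROUTES = {
    "summarize_ticket": "small-fast-model",
    "draft_customer_email": "instruction-tuned-model",
    "analyze_contract": "large-context-model",
}

def route(task: str) -> str:
    """Pick a model for the task; fall back to a safe default."""
    return MODEL_ROUTES.get(task, "default-guarded-model")

print(route("draft_customer_email"))  # instruction-tuned-model
```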
Masking data so that personally identifiable information (PII) never travels across the cloud, and deleting the prompt so it never gets “learned,” are two important safeguards when working with large language models (LLMs). Dynamic grounding and an audit trail that verifies data doesn’t leak are additional considerations.
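As a rough illustration of the masking step, the sketch below swaps obvious PII for placeholder tokens before a prompt ever leaves your environment, then restores the values locally on return. Production trust layers use far more robust detection (for example, named-entity recognition) than these simple regular expressions.

```python
import re

# Mask locally, send the sanitized prompt, restore on return.
# These patterns are illustrative only.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_pii(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap PII for placeholders; keep a local map to restore values later."""
    replacements: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label}_{i}>"
            replacements[token] = match
            prompt = prompt.replace(match, token)
    return prompt, replacements

def unmask(text: str, replacements: dict[str, str]) -> str:
    """Re-insert the original values into the model's response, locally."""
    for token, value in replacements.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask_pii("Email jane.doe@example.com about invoice 4411.")
print(masked)  # Email <EMAIL_0> about invoice 4411.
```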
Get relevant outputs without compromising trust
A lot is said about training the model, but for most companies it’s much simpler than that. The answer is prompts – the grounding you give the LLM when you ask it questions. Prompts don’t retrain the model; they steer it at inference time, supplying the context and constructs it needs to respond. The better and more detailed the prompt, the better and more relevant the output.
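Here’s a minimal sketch of what that enrichment can look like in practice. The field names and wording are illustrative stand-ins for context you might pull from your CRM or billing platform.

```python
# Prompt enrichment sketch: the model's weights never change; relevance
# comes from the context packed into the prompt. Field names here are
# hypothetical stand-ins for CRM or billing data.

def build_prompt(question: str, account: dict) -> str:
    context = "\n".join(f"- {key}: {value}" for key, value in account.items())
    return (
        "You are a billing support assistant. Answer using only the "
        "account context below; say so if the answer is not in the context.\n\n"
        f"Account context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt(
    "Why did my bill go up this month?",
    {"plan": "Pro Annual", "last_invoice": "$49.00", "proration_applied": True},
)
print(prompt)
```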
Grounding prompts well often means including information from your datasets that should remain proprietary and protected. With many models, though, you risk exposing that information for the model to learn, inviting compliance violations – or worse, leaks of PII or intellectual property (IP).
This is another reason it is important to choose a platform with built-in guardrails, including data masking, secure data retrieval, and zero retention of context. Zero retention means configuring the LLM so that the prompts, any prompt enrichment, and the answers provided are forgotten – as if you never asked.
Dynamic grounding – the process of aligning generative AI output with the intended context and purpose to enhance its accuracy and relevance – is a service in the trust layer that builds confidence in the validity of the outputs. While explainability is still a developing field – and one becoming more important as generative AI evolves – dynamic grounding is a step toward ensuring relevance without compromising trust.
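As a toy illustration of the idea, the sketch below retrieves the records most relevant to a request at runtime and keeps the list of sources, so a grounded answer can be traced back and validated. Real systems typically use embeddings and vector search; simple keyword overlap stands in here, and the document names are hypothetical.

```python
# Dynamic grounding sketch: select context at runtime, and retain the
# source list as an audit trail that supports explainability.

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank document ids by how many query words they share with the text."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc_id: len(words & set(documents[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = {
    "invoice-2024-06": "June invoice includes a mid-cycle plan upgrade proration.",
    "plan-catalog": "Pro Annual renews each June at the current list price.",
    "ticket-8812": "Customer asked about SSO configuration.",
}
sources = retrieve("Why did my June invoice go up?", docs)
print(sources)  # grounding context for the prompt, plus an audit trail
```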
Keep humans in the “co-pilot” seat
Human intervention remains necessary in many use cases, though to varying degrees and at varying points in the process – from fact-checking to simply accepting or declining an output. Routine, predictable tasks are more likely to fit the accept/decline model, whereas more nuanced tasks that need empathy, reasoning, and fact verification require more input. And it’s even better if your AI technology has toxicity detection to flag potential expressions of bias and stereotyping for review before they can impact your brand.
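A review gate can be as simple as scoring each draft and routing anything above a threshold to a human instead of the customer. In this sketch, score_toxicity is a hypothetical stand-in for whatever classifier your platform provides, and the threshold is arbitrary.

```python
# Review-gate sketch: hold flagged drafts for human review.
# score_toxicity is a placeholder; a real implementation would call
# the platform's toxicity classifier.

REVIEW_THRESHOLD = 0.3

def score_toxicity(text: str) -> float:
    """Placeholder scoring: fraction of words on a tiny blocklist."""
    flagged_terms = {"stupid", "useless"}
    words = text.lower().split()
    return sum(word in flagged_terms for word in words) / max(len(words), 1)

def route_output(draft: str) -> str:
    if score_toxicity(draft) > REVIEW_THRESHOLD:
        return "HOLD_FOR_HUMAN_REVIEW"
    return "OK_TO_SEND"

print(route_output("Your plan renews in June."))  # OK_TO_SEND
```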
Taking a “co-pilot” approach is a great model to start with. Generative AI’s capacity to augment human roles – accelerating productivity while helping produce more personalized, resonant customer experiences – delivers tangible value while keeping the needed safeguard of human review in place. Estimates of productivity gains in this scenario are very favorable. Let AI handle repetitive tasks, but keep humans in the flow when AI works on more complex issues that require empathy and reasoning.
One example of the “co-pilot” approach in action is convincing a subscriber to stay. While the human agent empathizes with the customer’s situation, generative AI can access their billing information, recommend an alternative plan or – based on customer lifetime value – offer a calculated discount to retain the customer profitably.
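The offer calculation might look something like the sketch below. The caps and margin figures are purely illustrative, not a recommended pricing policy; the agent stays in the loop to decide whether and how to present the offer.

```python
# Retention-offer sketch: the agent handles the conversation while the
# system computes what it can profitably offer. All figures are
# illustrative assumptions.

def retention_offer(monthly_fee: float, lifetime_value: float,
                    margin: float = 0.40) -> float:
    """Return a monthly discount that stays within a share of expected margin."""
    expected_margin = lifetime_value * margin
    # Cap the discount at 20% of the fee, and at 10% of expected margin
    # spread over a 12-month retention horizon.
    cap_from_fee = 0.20 * monthly_fee
    cap_from_margin = 0.10 * expected_margin / 12
    return round(min(cap_from_fee, cap_from_margin), 2)

print(retention_offer(monthly_fee=49.0, lifetime_value=2400.0))
# -> 8.0 (offer up to an $8.00/month discount for this customer)
```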
Close the AI trust gap
Closing the AI trust gap is also about assuring your employees and customers that you’re using generative AI responsibly. Transparency, guidelines, and upskilling are critical to giving your employees guardrails for using generative AI in their workflows and for using its outputs. With more confidence in generative AI, you’ll be able to focus on connecting with your customers in new, more relevant, and more personalized ways across the entire concept-to-care process.
Read our eBook on Transforming the End-to-End Customer Experience with Generative AI to learn more.