One of the biggest challenges that comes with AI is concern about content safety and truthfulness. The advent of easily accessible AI features has added a new risk surface to almost every solution that embraces AI for productivity and other benefits.
We have a new writer on the team who’s been known to make stuff up.
As is often the case with new technology, it tends to improve. But until it does, we may need to monitor this new team member in some circumstances.
With that looming crisis for CSOs in mind, I managed to make this work in Coda for Coda AI.
To be clear, these are just sample demos - AI content on the left, and VERIFAI’s interpretation of it on the right. It is not intended to be used this way, but it is an ideal way to show what VERIFAI does. Fact and safety verifications, however, ARE intended to be used by whoever can benefit from them, wherever it makes sense. Of course, Coda makes this easily possible.
Regardless of your testing and monitoring protocols, AI issues are difficult to mitigate; you can never guarantee perfection, so plan upfront how to spot and deal with problems that arise. Common approaches include setting up a monitored channel for users to share feedback (e.g., a thumbs up/down rating) and running a user study to proactively solicit feedback from a diverse mix of users, which is especially valuable when usage patterns differ from expectations.
Using the VERIFAI Pack, it is possible to monitor AI components in the background and deliver assessments through other Packs or webhooks to other data targets. Imagine a log of instances where AI components have unintentionally presented fiction in Coda documents. I’m pretty sure that’s exactly what management, Makers, and CSOs need as potentially poorly-constructed AI prompts are about to emerge across Coda document systems.
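To make the idea concrete, here is a minimal sketch of what such a background monitor might look like: a verification log entry, an escalation rule, and webhook delivery. Every name, field, and threshold here is hypothetical for illustration; this is not the VERIFAI Pack’s actual schema or API.

```typescript
// Hypothetical shape of a verification log entry; field names are
// illustrative only, not the VERIFAI Pack's actual schema.
interface VerificationEntry {
  docId: string;     // Coda document where the AI content appeared
  excerpt: string;   // the AI-generated text that was assessed
  verdict: "verified" | "unverified" | "fabricated";
  confidence: number; // 0..1 score from the verifier
  checkedAt: string;  // ISO timestamp
}

// Decide whether an assessment is worth escalating to the monitoring log.
// The 0.7 threshold is an arbitrary example value.
function shouldEscalate(entry: VerificationEntry, threshold = 0.7): boolean {
  return entry.verdict === "fabricated" && entry.confidence >= threshold;
}

// Deliver a flagged entry to an external data target via a webhook.
async function postToWebhook(
  url: string,
  entry: VerificationEntry
): Promise<number> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(entry),
  });
  return res.status; // let the caller decide how to handle failures
}
```

In practice, the escalation rule is where policy lives: a compliance-heavy team might log every "unverified" verdict, while a lighter-weight setup might only forward high-confidence fabrications.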
VERIFAI provides the underlying framework to start building AI features while embracing productive approaches to ensuring safe and healthy AI outputs. I’m happy to chat with anyone who has more stringent compliance and monitoring requirements.
My question to the community: is this needed?