How we protect your data when using third-party AI
At AskYourTeam, we understand the importance of protecting your personally identifiable information (PII) and take every precaution to ensure it's safe and secure.
That means as we explore new technologies like third-party Generative AI, your privacy remains our top priority. (Generative AI is a type of artificial intelligence that can produce text, images, and other content based on data and prompts provided.)
For example, let's consider our Insights Dashboard.
Before there is any interaction with the AI service, we implement techniques like:
- PII removal
- tokenisation, and
- data encryption.
You can find out more about how these techniques work and what else we're doing to protect your PII in the following sections.
1. We don't share PII with AI services
When you participate in surveys on our platform, we gather open-text comments to gain a deeper understanding of your feedback and opinions.
These comments might include PII like:
- organisation names
- personal names
- other employee or stakeholder names, or
- other sensitive details.
While we store some PII for platform functionality, we do not share any PII with third-party AI.
If you would like to read AskYourTeam's legal statement on privacy and how we handle information, please see our Privacy Policy.
Here's a flowchart of how the process works:
2. We remove PII from open-text comments
To safeguard your privacy, we automatically detect and remove any PII in open-text comments before sending them to third-party AI models, such as OpenAI's Large Language Model (LLM).
Our system identifies organisation names, people's names, and other sensitive details, replacing them with placeholder data while keeping the context and sentiment of your comments intact.
This process helps ensure your PII remains protected and your identity stays anonymous.
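To illustrate the idea behind this step, here is a minimal sketch of placeholder-based redaction. It is not AskYourTeam's actual implementation: the hard-coded patterns stand in for a trained named-entity recognition model, and the placeholder labels are assumptions chosen for readability.

```python
import re

# Toy redaction pass. Real systems detect PII with trained NER models; these
# hard-coded patterns and placeholder labels are illustrative stand-ins only.
PATTERNS = {
    "PERSON": re.compile(r"\b(?:Jane Doe|John Smith)\b"),
    "ORG": re.compile(r"\b(?:Acme Ltd)\b"),
}

def redact(comment: str) -> str:
    """Replace detected PII with placeholders before the text leaves our systems."""
    for label, pattern in PATTERNS.items():
        comment = pattern.sub(f"[{label}]", comment)
    return comment

redacted = redact("Jane Doe said the new rollout at Acme Ltd went well.")
# The sentence keeps its structure and sentiment; the identities are gone.
```

Because only the placeholders travel onward, the AI model can still read the tone and context of a comment without ever seeing who wrote it or who it mentions.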
3. We don't allow AI models to learn from the data they're provided
Once they're stripped of any PII, your comments are processed by the LLM without contributing to the model's training.
Using OpenAI's premium model ensures that the details you provide do not get absorbed into the system's knowledge base.
Your comments are processed within the AI model's existing knowledge only. They don't contribute to or modify its foundational learning. This approach safeguards your data's integrity while leveraging the AI's collective intelligence for accurate and relevant insights.
4. We keep your identity separate from LLM insight generation
When the LLM generates insights, it doesn't retain any PII from participants' answers. These insights are stored on separate servers, not on servers associated with individual identities.
By keeping your identity separate from insight generation, we create a robust privacy framework. So, as you take surveys and provide answers or feedback, you can be confident that your data is secure and your privacy is respected throughout the process.
5. We use encrypted tokens
When the AskYourTeam system receives insights from the LLM, PII re-association begins. Re-association means securely matching and restoring redacted PII to its original context.
This re-association is done using encrypted tokens that were created before the data was sent to the LLM. Once re-association is complete, the temporary tokens are discarded and system checks confirm data integrity.
This process ensures the accurate merging of information, so we can provide you with insights while maintaining the highest levels of privacy and security.
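The token round-trip described above can be sketched as follows. This is an illustration under simplified assumptions, not production code: the function names are invented for this example, and the real system encrypts the stored values rather than holding them in plain memory.

```python
import secrets

def tokenise(text: str, pii_values: list[str]) -> tuple[str, dict[str, str]]:
    """Swap each PII value for a one-off random token before the text is sent out.
    The token-to-value map never leaves our side."""
    mapping = {}
    for value in pii_values:
        token = f"tok_{secrets.token_hex(8)}"  # unguessable placeholder
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

def re_associate(insight: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the returned insight, then discard the map."""
    for token, value in mapping.items():
        insight = insight.replace(token, value)
    mapping.clear()  # temporary tokens are discarded after use
    return insight

outbound, mapping = tokenise("Sam praised the HR team.", ["Sam"])
# ...outbound text goes to the LLM; the insight that comes back still
# contains the tokens, which are then swapped back locally...
restored = re_associate(outbound, mapping)
```

The key design point is that the LLM only ever sees the random tokens; the mapping needed to reverse them exists solely inside AskYourTeam's systems and is destroyed once the insight is reassembled.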
6. We use best practice data security
Ensuring the security of user data is our top priority at AskYourTeam.
We use robust, industry-standard security measures to safeguard your data from unauthorised access, alteration, disclosure, or destruction.
Our infrastructure is designed with security in mind, implementing advanced encryption protocols, firewalls, and intrusion detection systems to protect your data during transmission and storage.
Our team of security experts also continuously monitors and reviews our security protocols to proactively identify and address potential vulnerabilities.
If you have any questions about how we use AI and security, please get in touch!