AI at Nitro
Built responsibly with your privacy and security at the forefront. Learn more about Nitro's approach to AI and how we protect your data.

Purpose and Data Handling

At Nitro, our AI tools are designed to make document processing more efficient and secure. We don’t create our own generative AI models or share your data with third parties to train their models. Instead, we use established AI models and ensure that your data is protected and used only for the tasks you request.

Nitro’s Document Assistant, our primary AI tool, is powered by OpenAI’s GPT models (the models behind ChatGPT) and hosted on the Microsoft Azure OpenAI Service. This keeps your data within a secure environment that Nitro controls. The Azure OpenAI Service follows Microsoft’s responsible AI principles to safeguard data and maintain high standards of security. You can learn more about Microsoft’s AI principles here.
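
For illustration only, the sketch below shows how an application might send requests to a privately hosted Azure OpenAI deployment through the official openai Python SDK rather than to the public ChatGPT service. The endpoint, deployment name, and environment variable are placeholders, not a description of Nitro’s actual infrastructure.

    import os

    from openai import AzureOpenAI

    # Hypothetical configuration: the endpoint, deployment name, and
    # environment variable below are placeholders only.
    client = AzureOpenAI(
        azure_endpoint="https://example-private-instance.openai.azure.com",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    # Requests go to the model deployment inside that private Azure resource,
    # not to the public ChatGPT service.
    response = client.chat.completions.create(
        model="gpt-4o",  # name of the Azure OpenAI deployment (placeholder)
        messages=[{"role": "user", "content": "Summarize this document section."}],
    )
    print(response.choices[0].message.content)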

Code of Conduct

Nitro adheres to Azure’s responsible AI guidelines, including:

  • Human oversight: Ensuring that AI development is reviewed by humans.
  • Prevention measures: Implementing safeguards against misuse of Nitro’s AI tools, including their use for fraud.
  • Thorough testing: Rigorous testing of applications to identify and fix potential issues.
  • Feedback channels: Allowing users to provide feedback on AI tools.
  • Defined scope: Using AI tools only for their intended purposes.

We have robust procedures in place to address any concerns related to our AI tools. Each tool or system undergoes a detailed internal review and assessment before it is integrated into our standard Software Development Life Cycle (SDLC) process, which ensures that it meets our quality and security standards. We also actively monitor user feedback to identify and resolve issues promptly; this ongoing feedback loop helps us continuously improve our AI tools and maintain a high standard of performance and reliability.

Data Privacy and Security

Nitro’s Document Assistant combines advanced large language models (LLMs), the same models that power ChatGPT, with a Retrieval-Augmented Generation (RAG) technique to provide responses relevant to the user’s request. We also use prompt engineering to improve the accuracy of results.
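
As an illustration only, the sketch below shows what a minimal RAG flow can look like: relevant passages are retrieved from the user’s document, folded into an engineered prompt, and sent to the hosted model. The function, deployment name, and prompt wording are hypothetical and do not represent Nitro’s implementation.

    from openai import AzureOpenAI

    def answer_question(client: AzureOpenAI, question: str, ranked_passages: list[str]) -> str:
        # 1. Retrieval: take the passages most relevant to the question.
        #    (A production system would rank them with embeddings or a search
        #    index; here we assume they arrive already ranked.)
        context = "\n\n".join(ranked_passages[:3])

        # 2. Prompt engineering: instruct the model to answer only from the
        #    retrieved excerpts so responses stay grounded in the user's document.
        messages = [
            {"role": "system",
             "content": "Answer using only the provided document excerpts. "
                        "If the answer is not in them, say you cannot find it."},
            {"role": "user",
             "content": f"Document excerpts:\n{context}\n\nQuestion: {question}"},
        ]

        # 3. Generation: send the grounded prompt to the hosted LLM deployment
        #    (deployment name is a placeholder).
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        return response.choices[0].message.content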

Your data privacy is our priority:

  • Your data is not shared: Your questions and responses are only accessible to you and not visible to other customers or Nitro employees.
  • Your data is not used for training: Your data is not used to train Azure OpenAI or other models.
  • Your data is secure: We use encryption to protect your data during transmission and storage, and access is securely managed.

While Document Assistant is in beta, it will be enabled for customers who purchased online; in larger organizations, administrators must opt in to enable AI tools for their users. Any personal data in your documents is used only temporarily and is deleted once the task is complete.

For more details on our security and privacy practices, please visit Nitro’s Trust Center.