AI Usage & Transparency Policy
Kind Robots LLC
1. Overview
Kind Robots is an AI product company focused on building safer and simpler AI for business. Transparency is one of our core values — Customers should understand what the AI is doing and why.
This AI Usage & Transparency Policy explains how artificial intelligence and large language models (LLMs) are used within the Kind Robots platform, how data flows between the Platform and third-party LLM providers, and what responsibilities Customers have under the BYOK (Bring Your Own Key) model.
2. How AI Is Used in the Platform
The Kind Robots platform enables Customers to build and deploy AI-powered chat agents. AI is used in the following ways:
2.1 Chat Agent Responses
AI Agents use LLMs to generate conversational responses to end-user queries. The behavior of each Agent is shaped by:
- A system prompt configured by the Customer
- API Manifests that define which external endpoints the Agent can call
- Conversation history within the session
2.2 API Orchestration
When an Agent determines that an end-user's request requires data from an external API, the Platform orchestrates the API call and provides the response data to the LLM for formatting and presentation.
2.3 Community Agent Orchestration
The Kind Robots community engine orchestrates multiple autonomous AI agents that can collaborate on tasks. These agents operate under ethical constraints (see Section 8) and are coordinated by a task orchestration system.
2.4 OpenAPI Import
When Customers import API specifications (OpenAPI/Swagger), the Platform may use an LLM to parse and extract endpoint definitions into Manifest format.
3. The BYOK Model
3.1 What BYOK Means
Kind Robots operates on a Bring Your Own Key model. This means:
- Customers provide their own LLM API keys — Kind Robots does not supply, bundle, or resell access to LLM providers
- Customers choose their provider — The Platform supports multiple LLM providers; Customers select which to use
- Customers manage their own costs — All charges from LLM providers are between the Customer and the provider
- Customers control their usage — API key rate limits and spending caps are managed directly with the provider
3.2 Why BYOK
The BYOK model provides:
- Customer control over provider choice, costs, and data handling
- Transparency — no hidden LLM markups or intermediary processing
- Flexibility — switch providers or models without changing Platform subscriptions
- Clean economics — Platform subscription covers Platform value, not variable LLM costs
4. Supported LLM Providers
The Platform currently supports the following LLM providers:
| Provider | Models | Provider Privacy Policy |
|---|---|---|
| OpenAI | GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, and others | openai.com/policies/privacy-policy |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku, and others | anthropic.com/privacy |
| Google | Gemini Pro, Gemini Ultra, and others | ai.google/responsibility |
| Custom | Any OpenAI-compatible API endpoint | Varies by provider |
Kind Robots does not endorse, recommend, or guarantee any specific LLM provider. Model availability and naming are subject to change by each provider.
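For the Custom provider row above, the following sketch shows the general shape of an OpenAI-compatible chat request. The base URL, API key, and model name are placeholders for illustration, not real Platform or provider values:

```python
import json

# Placeholder values — in practice these come from the Customer's
# provider configuration and BYOK key, not from Kind Robots.
CUSTOM_BASE_URL = "https://llm.example.com/v1"  # Customer-supplied endpoint
CUSTOMER_API_KEY = "sk-example"                 # Customer-supplied key (BYOK)

def build_chat_request(system_prompt, user_message, model="example-model"):
    """Assemble an OpenAI-style chat completions request for a
    custom, OpenAI-compatible endpoint."""
    return {
        "url": f"{CUSTOM_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {CUSTOMER_API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
        }),
    }

req = build_chat_request("You are a helpful agent.", "Hello!")
```

Any endpoint that accepts this request shape can be used as a Custom provider; data handling for such endpoints is governed entirely by that provider's own policies.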
5. Data Flow
5.1 How Data Moves Through the Platform
End-User → Kind Robots Platform → Customer-Selected LLM Provider → Kind Robots Platform → End-User
1. End-user sends a message to a chat Agent (via embedded widget or API)
2. Platform prepares the prompt by combining the message with:
   - The Agent's system prompt (configured by the Customer)
   - Relevant conversation history
   - API response data (if the Agent called external APIs)
3. Platform sends the prepared prompt to the Customer's selected LLM provider using the Customer's API key
4. The LLM provider returns a response to the Platform
5. Platform delivers the response to the end-user (with optional custom rendering)
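The prompt-preparation step above can be sketched as follows. This is an illustrative outline only; the function and field names are hypothetical and do not describe the Platform's actual internals:

```python
def prepare_prompt(system_prompt, history, user_message, api_data=None):
    """Combine the Agent's system prompt, session history, optional API
    response data, and the new end-user message into one message list."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # relevant conversation history for this session
    if api_data is not None:
        # Data the Agent fetched from an external API is surfaced to the
        # LLM for formatting and presentation (Section 2.2)
        messages.append({"role": "system", "content": f"API result: {api_data}"})
    messages.append({"role": "user", "content": user_message})
    return messages

# Example: a second turn that includes data from an external API call
msgs = prepare_prompt(
    system_prompt="You are a support agent for Acme.",
    history=[
        {"role": "user", "content": "What's my order status?"},
        {"role": "assistant", "content": "Let me check."},
    ],
    user_message="Any update?",
    api_data={"order": 123, "status": "shipped"},
)
# msgs: system prompt + 2 history turns + API context + new user message → 5 entries
```

Note that only the items assembled here reach the LLM provider; account, billing, and credential data listed in Section 5.3 never enter this message list.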
5.2 What Data Is Sent to LLM Providers
- End-user messages and conversation context
- System prompts configured by the Customer
- API response data included in the Agent's context (as defined in Manifests)
- Model parameters (temperature, max tokens, etc.) configured by the Customer
5.3 What Data Is NOT Sent to LLM Providers
- Customer account information (name, email, organization)
- Billing or payment information
- Authentication credentials or passwords
- LLM API keys (keys are used to authenticate requests but are not included in prompt content)
- Other Customers' data
- Platform configuration data unrelated to the specific Agent interaction
- Usage analytics or platform telemetry
6. Third-Party Provider Data Practices
6.1 Transparency Disclosure
Kind Robots does not control how third-party LLM providers process, store, retain, or use data sent to them. Data sent to LLM providers through the Platform is governed by each provider's own terms of service, privacy policy, and data processing practices.
6.2 Provider Policies (as of Last Updated Date)
Customers should review each provider's policies directly. Key considerations:
- Data retention: Providers may retain prompts and responses for varying periods
- Model training: Some providers may use API data for model training unless Customers opt out; policies vary by provider and API tier
- Data location: Providers may process data in different jurisdictions
- Compliance: Providers offer varying levels of compliance certifications (SOC 2, HIPAA, etc.)
6.3 Customer Due Diligence
Customers are responsible for:
- Reviewing and accepting the terms of their chosen LLM provider(s)
- Ensuring their use of LLM providers complies with applicable data protection laws
- Configuring their provider accounts for appropriate data handling (e.g., opting out of training data usage where available)
- Evaluating whether a provider meets their compliance and regulatory requirements
7. No Training on Customer Data
Kind Robots does not use Customer content, chat data, Agent configurations, or any other Customer-provided data to train AI models.
Customer data is processed solely to deliver the Platform's functionality as described in our Terms of Service.
8. AI Safety Measures
8.1 Community Engine — Asimov's Laws
The Kind Robots community engine enforces ethical constraints on autonomous AI agents, inspired by Asimov's Laws of Robotics:
- Do no harm — Agents must not generate outputs that could foreseeably cause harm to individuals
- Follow authorized instructions — Agents must operate within Customer-configured boundaries, except where doing so would cause harm
- Maintain integrity — Agents should preserve their configured purpose and safety constraints
8.2 Governor Agent Concept
Kind Robots is developing Governor agent capabilities for real-time operational monitoring of deployed AI agents, including:
- Drift detection (agents deviating from intended behavior)
- Budget monitoring and overrun alerts
- Policy violation detection
- Automated intervention capabilities
8.3 Platform Safety Features
- Input validation on all API interactions
- Rate limiting to prevent abuse
- Organization-level data isolation
- Encrypted storage of sensitive data (API keys encrypted with AES-256-GCM)
- CORS controls per project for widget embedding
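As an illustration of the AES-256-GCM storage approach mentioned above, the sketch below uses the third-party `cryptography` package. The key-management scheme (a single master key, nonce prepended to the stored blob) is an assumption for illustration, not a description of Kind Robots internals:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(master_key: bytes, api_key: str) -> bytes:
    """Encrypt a Customer LLM API key with AES-256-GCM.

    GCM provides confidentiality and integrity: any tampering with the
    stored blob causes decryption to fail. A fresh 96-bit nonce is
    required per record; here it is prepended to the ciphertext.
    """
    nonce = os.urandom(12)
    ciphertext = AESGCM(master_key).encrypt(nonce, api_key.encode(), None)
    return nonce + ciphertext

def decrypt_api_key(master_key: bytes, blob: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None).decode()

master_key = AESGCM.generate_key(bit_length=256)  # 32-byte (256-bit) key
blob = encrypt_api_key(master_key, "sk-example-key")
assert decrypt_api_key(master_key, blob) == "sk-example-key"
```

The practical effect for Customers is that stored LLM API keys are unreadable at rest without the master key, and are used only to authenticate outbound requests (Section 5.3).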
9. Customer Responsibilities
9.1 End-User Disclosure
Customers must clearly disclose to end-users that they are interacting with an AI agent, not a human. This is required by our Acceptable Use Policy and may be required by applicable law.
9.2 Data Sensitivity
Customers are responsible for:
- Evaluating whether their use case involves sensitive data (personal data, health data, financial data, etc.)
- Configuring Agents and system prompts appropriately for their data sensitivity requirements
- Not transmitting data through the Platform that they are legally prohibited from sharing with third-party processors
9.3 LLM Provider Compliance
Customers are responsible for:
- Selecting LLM providers appropriate for their compliance requirements
- Configuring provider-side data handling settings (e.g., opting out of training data usage where available)
- Maintaining current awareness of their providers' data practices
9.4 Monitoring
Customers should monitor their Agent interactions for:
- Unexpected or inappropriate outputs
- Data leakage in Agent responses
- Compliance with their organization's AI usage policies
10. Automated Decision-Making
The Platform may be used to build Agents that assist with decision-making, but:
- Agents should not be the sole basis for consequential decisions about individuals without human oversight
- Customers deploying Agents in regulated areas (finance, healthcare, employment) must ensure compliance with applicable regulations governing automated decision-making
- End-users have the right to know when AI is involved in decisions affecting them and to request human review
11. Changes to This Policy
We may update this policy as the Platform evolves and as AI regulation develops. We will provide notice of material changes through the methods described in our Terms of Service.
12. Contact
For questions about AI usage on the Platform:
Kind Robots LLC
Email: ai-transparency@kindrobots.ai
Website: kindrobots.ai
For LLM provider-specific questions, contact your provider directly using the links in Section 4.