CodingFleet Data Privacy Guide
Privacy Settings on CodingFleet
CodingFleet offers two key privacy settings to give you more control over your data:
- Privacy-Focused AI Models: When this setting is enabled, the model selector will only show AI models that are “privacy-respecting” – in other words, models whose providers state they do not use your prompts or data to train their future models. This helps you choose models that prioritize user privacy. (Note: This relies on the model providers’ policies and is not a 100% guaranteed safeguard, but it filters out models known to use prompt data for training.)
- Private Session Mode: Enabling this mode means none of your usage data for that session is stored in CodingFleet’s database. No new chat history or prompts from that session are saved on our servers. Once you close or refresh the page, all session data is gone. This provides an “incognito” experience for your usage. (Existing history saved before the private session mode is enabled remains unaffected – private mode only prevents storing new data in that session.)
These settings can be found in your Account & Privacy settings once you log in. Combined, they let you use models with strong privacy stances while leaving no trace of your session on our platform. A conceptual sketch of how the two settings work together follows below.
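To make the two settings concrete, here is a minimal TypeScript sketch of the behavior they describe. All names (ModelInfo, privacyRespecting, handleChatTurn, and so on) are hypothetical and chosen purely for illustration; this is not CodingFleet's actual code.

```typescript
// Illustrative only: hypothetical shapes, not CodingFleet's internals.

interface ModelInfo {
  id: string;
  provider: string;
  // True when the provider states it does not train on API prompts.
  privacyRespecting: boolean;
}

interface UserSettings {
  privacyFocusedModelsOnly: boolean;
  privateSessionMode: boolean;
}

// With the Privacy-Focused AI Models filter on, the model selector
// only lists models whose providers don't train on your prompts.
function selectableModels(all: ModelInfo[], settings: UserSettings): ModelInfo[] {
  return settings.privacyFocusedModelsOnly
    ? all.filter((m) => m.privacyRespecting)
    : all;
}

// In Private Session Mode, the chat turn is returned to the browser
// but never written to the database.
async function handleChatTurn(
  prompt: string,
  settings: UserSettings,
  callModel: (prompt: string) => Promise<string>,
  saveToHistory: (prompt: string, reply: string) => Promise<void>,
): Promise<string> {
  const reply = await callModel(prompt);
  if (!settings.privateSessionMode) {
    await saveToHistory(prompt, reply); // skipped entirely in private mode
  }
  return reply;
}
```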
Data Privacy Practices of Our AI Providers
At the time of writing, CodingFleet integrates AI models from eight providers. Each provider has its own privacy policy governing how it handles your prompts, code, and other data. Below is a summary of the relevant privacy practices for each provider (covering 20+ models in total). For each, we note whether your prompts are used for training, how long data is retained, and data-handling measures such as encryption and regional storage, so you can understand what happens to your data when you use a given model:
OpenAI (e.g. GPT and o-series models) (source)
- Use of Prompts for Training: OpenAI does not use data submitted via their API to train their models by default. Since March 1, 2023, any prompts and outputs you send through OpenAI’s API (which CodingFleet uses) are excluded from OpenAI’s model training – they are not used to improve future models.
- Data Retention: OpenAI may retain API request data (your prompts and the AI’s replies) for up to 30 days on their servers, solely to monitor for abuse or misuse. After 30 days, it is automatically deleted. For extra-sensitive use cases, OpenAI offers a “zero data retention” option on certain endpoints by request, which means they won’t store even the 30-day temporary logs.
- Data Handling and Security: All data sent to OpenAI is encrypted in transit and at rest. OpenAI’s systems use TLS 1.2+ for data in transit, and encrypt stored data with AES-256. OpenAI is a US-based provider (data is processed in the U.S.), but they comply with GDPR through a Data Processing Addendum (DPA) for clients and have achieved certifications like SOC 2.
Anthropic (Claude models)
- Use of Prompts for Training: Anthropic’s policy for Claude is that it will not use your API inputs or outputs to train its models by default. In other words, prompts you send to Claude models are not used to improve Claude’s future behavior, unless you explicitly opt in. (Anthropic only uses data you intentionally provide as feedback for training, or if you join a special program.) Source.
- Data Retention: Anthropic automatically deletes prompts and outputs after 30 days from their systems. The exception is if data triggers a policy violation: if your prompt is flagged by their safety system as breaking the rules, they may retain that prompt and response for up to 2 years to improve their filters and for legal compliance. Source.
- Data Handling: Anthropic is U.S.-based. They emphasize security: data is encrypted at rest and in transit (AES-256 and TLS 1.2+, according to Anthropic’s trust center), and they can sign DPAs for GDPR compliance. Prompt data is processed in the cloud (Anthropic uses secure cloud infrastructure, reportedly in the US), and only authorized personnel can access it, for troubleshooting or abuse prevention. Overall, Claude’s API offers privacy protections similar to OpenAI’s.
Google (Gemini models via Vertex AI API)
- Use of Prompts for Training: Google Cloud’s policy is clear that it will not use your content to train or improve Google’s own models without your permission. All prompts and data sent to the Vertex AI Gemini services remain your data. Google’s AI/ML Privacy Commitment states customer data is never used to train Google’s foundation models or anyone else’s model unless you explicitly opt in.
- Data Retention: By default, Google’s Vertex AI may cache your prompts and the model’s responses for up to 24 hours. Caching speeds up repeat requests (for example, sending the same prompt twice can return the result faster). Cached data is stored in the region where the request was served (e.g., an EU region if you use a European endpoint) and is automatically purged within 24 hours. Aside from caching, Google Cloud may log prompts for abuse monitoring (to detect misuse such as prohibited content). Important: if you are a paying Google Cloud customer with an invoiced account, Google does not log your prompts even for abuse detection by default; that logging mainly applies to certain free-tier users. In any case, any such logs are used only internally and never for training.
- Data Residency and Security: All data is encrypted by Google Cloud (both in transit and at rest, as with all GCP services). Google Cloud is fully compliant with GDPR and offers DPAs. In short, using Google’s Gemini via Vertex AI is enterprise-grade: your data stays under your control, and isn’t used to train Google’s models.
Meta LLaMA & Qwen Models (via Together AI platform)
- Use of Prompts for Training: CodingFleet uses Meta’s LLaMA-based models and Qwen models through the Together AI platform. Together’s policy is that it does not use any of your prompt or response data to train models unless you explicitly opt in. So when you use LLaMA or Qwen, your prompts are not used to fine-tune the underlying models. (Meta’s open LLaMA models and Alibaba Cloud’s Qwen don’t “phone home” on their own; the key point is that Together, which hosts the models, isn’t learning from your data either without your consent.)
- Data Retention: By default, Together may keep some request logs to monitor service health, but it offers an explicit privacy setting that retains no prompts or responses. We have configured our Together integration to prioritize privacy: your prompts to LLaMA via Together are not stored beyond immediate processing unless needed for debugging. Together’s platform even lets users “disable retention” so that prompts and outputs are not saved for any purpose.
- Opt-Out: Together’s defaults are already privacy-friendly, and any logging that is in place can be turned off in settings. CodingFleet’s integration opts for the highest privacy mode with Together, meaning we instruct Together not to retain your data. No additional action is needed on your part.
- Data Handling: Together is a third-party service that runs these models on cloud servers (likely in the US or EU). They mention that you can request deletion of any collected info and adjust privacy settings. Data in transit to Together is encrypted (HTTPS). In summary, using LLaMA through Together is designed to be as private as using OpenAI or Anthropic – no training on your data, and no persistent storage of your prompts.
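As an illustration of the transport security mentioned above, requests to Together’s OpenAI-compatible chat completions endpoint travel over HTTPS (TLS). Below is a minimal sketch; the model name and API key are placeholders, and retention behavior is governed by account-level settings rather than by anything in the request itself.

```typescript
// Minimal sketch of an HTTPS (TLS-encrypted) request to Together's
// OpenAI-compatible chat completions API. The model id and API key are
// placeholders; data retention is controlled by account-level settings,
// not by anything in this request.
const response = await fetch("https://api.together.xyz/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.TOGETHER_API_KEY}`,
  },
  body: JSON.stringify({
    model: "meta-llama/Llama-3.3-70B-Instruct-Turbo", // placeholder model id
    messages: [{ role: "user", content: "Explain big-O notation briefly." }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```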
DeepSeek
- Use of Prompts for Training: DeepSeek is an AI provider that does use user data for training and improving its models. According to DeepSeek’s privacy policy, your inputs (prompts, code, etc.) and the AI’s outputs can be analyzed and utilized to enhance their machine learning models. This is an important difference from the providers above. In practice, that means anything you send to a DeepSeek model might later influence how the model behaves for others (they learn from your usage).
- Data Retention and Location: Data from DeepSeek sessions is stored on DeepSeek’s servers in China (People’s Republic of China), since DeepSeek is based in Hangzhou, China. The data (your prompts and the AI’s answers) is subject to Chinese data regulations. DeepSeek’s policy indicates they keep your data for as long as it’s needed to operate and improve the service.
- Opt-Out: DeepSeek does not offer a user-level opt-out of data usage for training. By using their model, you implicitly allow them to use your interactions to improve the AI. (The only way to prevent that would be not to use DeepSeek or possibly to contact them to delete your data.) There’s also no known “privacy mode” for DeepSeek’s API – it logs and uses data by design.
- Data Handling: All communications with DeepSeek still go through CodingFleet’s secure connection, but once at DeepSeek, the data is handled under their policy. They likely have standard security measures (encryption in transit to their servers, etc.), but the main concern for users is lack of privacy guarantees. Important: If you are concerned about data privacy, you may want to avoid using DeepSeek models or ensure you do not input sensitive code/data when using them. CodingFleet’s “Privacy-Focused AI Models” filter will exclude DeepSeek when enabled, specifically because DeepSeek’s model is not privacy-focused (it trains on your prompts).
xAI (Grok models via OpenRouter) (source)
- Use of Prompts for Training: xAI’s flagship model Grok is integrated into CodingFleet through OpenRouter. By default, xAI does not use your API prompts or outputs to train its model without your explicit permission. In fact, xAI has an opt-in program: you can choose to share your API data to help them improve Grok (and they incentivize this with free credits), but if you do nothing, your data remains private and is not used for model training. We do not opt in to data sharing on your behalf.
- Data Retention: xAI’s API will store your requests and the model’s responses for 30 days on their servers. This temporary storage is for service functionality and troubleshooting. After 30 days, the data is deleted. If you decide to opt into their data-sharing program, your data might be kept longer (to be used in training feedback loops), but again that would only happen if you explicitly agree.
- Opt-In/Oversight: By default, you are opted out of any training use. xAI provides an account setting to opt in to “share data with xAI”; CodingFleet does not enable it. You can use xAI’s Grok via our platform with confidence that it is not learning from your specific conversations. If you ever use Grok through other means (such as the X platform or xAI’s own app), be aware that different terms may apply (e.g., using Grok on Twitter/X may fall under Twitter’s privacy policy).
- Data Handling: xAI is a US-based company, and Grok’s API runs in the cloud (likely in US data centers). They use encryption in transit (HTTPS) and standard cloud security. If you’re an EU user, note that your data goes to the US in this case, but xAI has a privacy policy and GDPR addendum to protect users. They also honor data deletion requests – if you ask to delete your data, they will remove it (they state it may take up to 30 days to fully delete from backups). Overall, xAI’s approach is similar to OpenAI’s: no training on customer data by default, and limited retention.
Mistral AI
- Use of Prompts for Training: Mistral AI (a European AI company) provides models (like Codestral) via an API. Mistral does not use your prompts or outputs from their API to train their models. Their Terms of Use explicitly state: “We do not use Your Prompts and/or Your Outputs to train our model(s).” So your interactions remain completely separate from their model training process. Source.
- Data Retention: Mistral’s API keeps only minimal logs for 30 days for auditing and support. Specifically, they store API call logs (which likely include metadata and possibly the prompts) for up to 30 days, after which those logs are automatically deleted. They do not use even those logs for any model training or improvement – it’s purely for compliance/audit purposes.
- Data Handling and Location: One advantage for privacy: Mistral’s servers are located in Europe (specifically in Sweden). That means any data you send to Mistral’s models stays in the EU. This is good for GDPR and data sovereignty (your data isn’t transferring to the US or elsewhere). Additionally, Mistral encrypts all data at rest and in transit (AES-256 for storage and TLS 1.2+ for network). Mistral, being based in France, is fully GDPR compliant. In summary, Mistral AI’s services are very privacy-conscious – no training on your data and strong security in an EU environment.
Where CodingFleet Stores Your Data (DigitalOcean FRA1)
Aside from the AI providers, it’s important to know how CodingFleet itself handles your usage data (chat histories, etc.). Our platform stores user data on DigitalOcean, a cloud hosting provider. Specifically, our servers and database are in the FRA1 data center – which is in Frankfurt, Germany.
What this means for you: All your data on CodingFleet (account info, usage data, preferences) is stored in the EU (under German jurisdiction). This aids in compliance with GDPR and European data privacy requirements, because your personal data isn’t being exported outside the EU. We chose FRA1 intentionally to ensure EU users have their data kept in Europe.
DigitalOcean’s privacy and security practices: DigitalOcean is a reputable cloud host that is committed to user privacy. They are compliant with GDPR – they even provide a Data Processing Agreement and were part of the EU-US Privacy Shield (now they use standard contractual clauses for data transfer). Data stored on DigitalOcean’s servers is encrypted at rest using AES encryption, and all data in transit to/from our servers is protected with TLS encryption. In other words, your data in our database is not sitting there in plain text, and any communication between you and CodingFleet is encrypted.
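For illustration, here is what TLS-protected database connectivity typically looks like at the application layer – a minimal Node.js sketch using the pg driver. The connection details are placeholders, and this is not CodingFleet’s actual configuration.

```typescript
import { Client } from "pg"; // npm install pg

// Placeholder connection details; illustrative only.
const client = new Client({
  host: "db.example-fra1.internal",
  port: 5432,
  user: "app",
  password: process.env.DB_PASSWORD,
  database: "codingfleet",
  // Require TLS so data in transit to the database is encrypted,
  // and verify the server certificate against a trusted CA.
  ssl: { rejectUnauthorized: true },
});

await client.connect();
const { rows } = await client.query("SELECT now()");
console.log(rows[0]);
await client.end();
```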
DigitalOcean’s FRA1 facility adheres to high security standards (physical security, 24/7 monitoring, etc.). As a processor, DigitalOcean only accesses your data as needed to maintain the service (for example, backups or hardware maintenance) and follows our instructions under the DPA. They do not use your data for any purpose other than to keep our service running. Source.
In summary, CodingFleet stores your data securely in Germany, on DigitalOcean’s robust infrastructure. No matter which AI model you use, your chat history and account info remain on our EU-based servers. We do not send your account or history data to any third party; the only data that leaves our platform is the prompt you explicitly submit, which goes solely to the model provider you chose, as described above. And if you use Private Session Mode, we don’t keep that session’s data on DigitalOcean beyond your session at all.
We hope this provides a clear overview of how data privacy is handled on CodingFleet. Our goal is to give developers, students, and teachers the powerful AI assistance they need without sacrificing privacy. You can further maximize privacy by combining the Privacy-Focused Models filter (to avoid providers who train on user data) with Private Session Mode (to avoid storing anything on our side). We will continue to be transparent about data practices and update this guide if policies change.
Happy coding – safely and securely!
Last Updated: May 2025