Confer Redefines AI Privacy With End-to-End Encryption
Confer ensures architectural privacy using E2EE and TEE, protecting AI interactions from service providers and unauthorized access.

The prompts we hand over to AI have grown beyond simple questions into a kind of digital unconscious, holding intimate personal thoughts and core corporate secrets. Yet the moment a user presses Enter, they must rely solely on the service provider's goodwill as to where that data flows and whether it becomes training material. Moxie Marlinspike, the founder of Signal, has launched a new challenge to this era of 'blind trust.' His new service, 'Confer,' applies end-to-end encryption (E2EE) to the entire AI conversation process, establishing a technical barrier that even the service provider cannot penetrate.
Artificial Intelligence in an Invisible Vault
Confer rejects the 'Policy Privacy' touted by existing AI services and puts 'Architectural Privacy' at the forefront. While most companies reassure users with terms of service stating they "will not use data for training," Confer solves the problem by creating a "structure where data cannot be viewed even if one wanted to." To achieve this, Confer combines the 'Noise Pipes' protocol with a 'Trusted Execution Environment (TEE).'
The core is the Noise handshake performed between the client and the server. When a user sends a message, the data is immediately encrypted, and decryption occurs only within the TEE—a hardware-isolated area within the server. The TEE is an independent computing space inaccessible even to the operating system or cloud administrators. Data decrypted here undergoes the AI model's inference process, is re-encrypted, and is then delivered back to the user. Throughout this process, Confer, the server operator, cannot view or store any data in plaintext.
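The round trip described above can be sketched in miniature. Everything below is illustrative, not Confer's implementation: the toy XOR "cipher" stands in for the AEAD a real Noise transport would negotiate, and `noise_handshake` simply hands both endpoints a shared random key instead of performing an actual Diffie-Hellman exchange.

```python
import hashlib
import os

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Toy keystream cipher: a placeholder for the ChaCha20-Poly1305 AEAD
    # a real Noise transport would use. Not suitable for actual secrecy.
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def noise_handshake() -> bytes:
    # A real Noise handshake derives this via Diffie-Hellman key agreement;
    # here we simply give both endpoints the same fresh random session key.
    return os.urandom(32)

def tee_inference(session_key: bytes, ciphertext: bytes, nonce: bytes) -> bytes:
    # Runs *inside* the hardware-isolated enclave: decrypt, infer, re-encrypt.
    prompt = xor_stream(session_key, nonce, ciphertext).decode()
    reply = f"echo: {prompt}"            # placeholder for model inference
    return xor_stream(session_key, nonce + b"r", reply.encode())

# Client side: the host OS and cloud operator only ever see ciphertext.
key = noise_handshake()
nonce = os.urandom(12)
ct = xor_stream(key, nonce, b"This is a secret...")
reply_ct = tee_inference(key, ct, nonce)
reply = xor_stream(key, nonce + b"r", reply_ct).decode()
```

The structural point is that the untrusted host only ever forwards `ct` and `reply_ct`; plaintext exists solely inside `tee_inference`, the stand-in for the enclave.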
Currently, Confer supports a number of leading proprietary and open-source LLMs, including 'Advanced Models' available through paid plans. However, rather than officially listing specific model names (such as GPT-4 or Claude), the service is structured around performance-oriented model groups. Users can securely synchronize their conversation history across multiple devices using Passkeys, and these records exist on Confer's servers only in encrypted form.
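Encrypted-at-rest sync of this kind can be sketched as client-side sealing before upload, so the server only ever stores opaque blobs. The names and KDF below are assumptions for illustration; a real client would derive key material from passkey-held secrets (for instance a WebAuthn PRF output) and use a proper AEAD rather than this toy encrypt-then-MAC construction.

```python
import hashlib
import hmac

def derive_sync_key(prf_output: bytes, device_salt: bytes) -> bytes:
    # Stand-in KDF: a real client would feed in passkey-held secret
    # material, never a user password. Names here are hypothetical.
    return hashlib.pbkdf2_hmac("sha256", prf_output, device_salt, 600_000)

def seal_history(key: bytes, history: bytes) -> bytes:
    # Toy encrypt-then-MAC; a real client would use an AEAD like AES-GCM.
    stream = hashlib.sha256(key + b"enc").digest() * (len(history) // 32 + 1)
    ct = bytes(a ^ b for a, b in zip(history, stream))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return tag + ct                      # this opaque blob is all the server sees

def open_history(key: bytes, sealed: bytes) -> bytes:
    tag, ct = sealed[:32], sealed[32:]
    # Reject any blob the server (or anyone else) has tampered with.
    assert hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest())
    stream = hashlib.sha256(key + b"enc").digest() * (len(ct) // 32 + 1)
    return bytes(a ^ b for a, b in zip(ct, stream))

# Any device holding the same passkey-derived key can open the blob.
key = derive_sync_key(b"example-prf-output", b"device-salt")
encrypted_blob = seal_history(key, b'{"messages": ["hello"]}')
```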
Beyond the Trade-off Between Speed and Security
The process of utilizing end-to-end encryption and TEE inevitably generates additional computational overhead. While minor delays might not be an issue for general chat services, they can be critical in an LLM environment where real-time response is vital. Confer has addressed this head-on through hardware acceleration technology. By optimizing the latency occurring during encryption and decryption at the hardware level, the response speed experienced by the user is maintained at a level similar to that of typical unencrypted AI services.
The industry is paying attention to Confer not just for its security features, but because it signifies a philosophical shift in AI interaction. Until now, companies have deployed security gateway solutions to block internal data from leaking externally. However, these were defensive measures focused strictly on 'leak prevention.' In contrast, Confer lets users directly verify the integrity of the code running on the server through Remote Attestation. In essence, users gain cryptographic proof that the data they send actually passes through the promised encryption procedures.
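A remote attestation check can be sketched as verifying a signed 'quote' that binds a hash (measurement) of the enclave code to a fresh client nonce. The simulation below uses an HMAC as a stand-in signature, and every name in it is hypothetical; real attestation schemes (e.g. Intel TDX, AMD SEV-SNP) rely on vendor certificate chains rather than a shared key.

```python
import hashlib
import hmac

# Simulated hardware root of trust; real quotes are signed with keys
# fused into the chip and verified against the vendor's certificate chain.
VENDOR_KEY = b"simulated-hardware-root-of-trust"

def tee_quote(enclave_code: bytes, client_nonce: bytes) -> dict:
    # Produced by the TEE hardware: measures the code actually loaded,
    # binds it to the client's fresh nonce, and signs both together.
    measurement = hashlib.sha256(enclave_code).hexdigest()
    payload = (measurement + client_nonce.hex()).encode()
    return {"measurement": measurement,
            "nonce": client_nonce.hex(),
            "sig": hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()}

def verify_quote(quote: dict, expected_measurement: str, client_nonce: bytes) -> bool:
    # Client side: check signature, code hash, and nonce freshness before
    # releasing any handshake material to the server.
    payload = (quote["measurement"] + quote["nonce"]).encode()
    good_sig = hmac.compare_digest(
        quote["sig"], hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest())
    return (good_sig
            and quote["measurement"] == expected_measurement
            and quote["nonce"] == client_nonce.hex())

# The client compares against a measurement published for audited code.
enclave_code = b"def serve(): ..."
expected = hashlib.sha256(enclave_code).hexdigest()
nonce = b"fresh-client-nonce"
quote = tee_quote(enclave_code, nonce)
```

The nonce matters: without it, a malicious server could replay an old quote from a machine that once ran the audited code.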
Of course, limitations and concerns exist. No concrete figures have yet been released on whether encryption overhead under E2EE affects the model's ability to maintain conversation context, i.e., its usable context window. Furthermore, additional verification is needed to determine what constraints 'Private Inference,' performed in an encrypted state, places on the model's maximum token throughput.
A New Privacy Standard for Enterprises and Individuals
The emergence of Confer offers a new option, particularly for companies in the legal, medical, and financial sectors that are sensitive to data security. Previously, they had to operate on-premise models or go through complex security gateways, but now they can maintain a Zero Trust security model while enjoying the convenience of cloud-based AI.
Developers and security professionals must now ask "What architecture protects our data?" rather than just "Which AI model is the smartest?" Since Confer supports secure synchronization across devices via Passkeys, individual users can also build a conversation environment without security gaps between their smartphones and desktops.
Just as Signal set the standard for privacy in the messenger market, Confer aims to be the new conversation standard for the AI era. As artificial intelligence becomes more like human intelligence, the vessel containing that intelligence must be more robust and transparent. If Marlinspike's new experiment succeeds, we will finally enter an era where we can comfortably whisper to AI, "This is a secret..."
FAQ
Q: Is the response speed slower than existing chatbots when using Confer? A: Although end-to-end encryption (E2EE) and TEE processing add extra computation, hardware acceleration keeps speed at a level where real-time responses remain possible. Specific millisecond-level latency figures have not been released, but the perceived slowdown in typical usage environments is minimal.
Q: What is the specific list of LLM models supported by Confer? A: Confer states that it supports a number of leading LLM and open-source models. However, instead of disclosing specific names of individual models (e.g., GPT-4, Claude 3, etc.) in a list format, they are provided categorized as 'Advanced Models' depending on the plan.
Q: What is the decisive difference between existing corporate security gateways and Confer? A: Existing gateways rely on 'Policy Privacy' to prevent data leaks through policy, whereas Confer provides 'Architectural Privacy' that fundamentally blocks data access technically. In particular, the ability for users to directly verify the integrity of server code through Remote Attestation technology is the greatest technical advantage.
Summary
Confer, unveiled by Moxie Marlinspike, has implemented a Zero Trust architecture where even the service provider cannot view data by introducing end-to-end encryption (E2EE) and a Trusted Execution Environment (TEE) to AI chat. This signifies a shift toward 'Architectural Privacy' that technically enforces privacy beyond simple security policies. Moving forward, the key to its market success will be how Confer balances security performance with the inference efficiency of AI models.