Confer AI Redefines Privacy Through Confidential Computing and Hardware
Confer AI secures user data with hardware-backed encryption, challenging the industry's surveillance-driven data policies.

The era in which your conversations are analyzed in real time to become training fodder for Large Language Models or bait for personalized advertisements may finally be coming to an end. Moxie Marlinspike, the founder of Signal, has made a strategic move in the generative AI market that is not a mere feature update but a restoration of the 'right to know' and the 'right to be forgotten.' The security-focused AI service 'Confer' offers the convenience of ChatGPT while keeping the entire pipeline, from the moment data leaves the user's device, inside a fortress of encryption.
A Declaration of War Against Silicon Valley's Surveillance Capitalism
Confer's emergence directly challenges the data collection policies maintained by existing commercial LLM developers. While most AI services use conversation data for model training or as marketing metrics under the guise of anonymization, Confer has adopted a 'non-training' policy that blocks this at the source. This goes beyond simply stating "we will not train on your data" in the terms of service: Confer employs a method that technically proves the integrity of data from its creation to its destruction.
Most notable is the price tag. Confer has set a rather substantial subscription fee of $34.99 per month. Instead of providing the service for free and monetizing data behind the scenes, the strategy is to cover infrastructure costs through a transparent subscription model. This is the cost of maintaining a 'clean AI environment' that excludes advertising and data tracking, specifically targeting professionals and the corporate market where privacy is directly linked to survival.
Technical Fortress: Security Guaranteed by Hardware
The way Confer guarantees non-training of data resembles a bank vault system. The core is the combination of End-to-End Encryption (E2EE) and Confidential Computing. When a user enters a query, the data is encrypted on the device, using keys tied to the user's Passkey, before being transmitted. Neither server operators nor engineers have any way to decrypt this ciphertext.
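The encrypt-before-transmit flow described above can be sketched as follows. This is a toy illustration using only Python's standard library: the keystream construction (SHA-256 in counter mode) and the HMAC tag stand in for a real authenticated cipher such as AES-GCM, and the key is generated randomly rather than derived from an actual passkey. Confer's real implementation is not public, so every name here is hypothetical.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Stands in for a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_on_device(key: bytes, plaintext: bytes):
    """Encrypt and authenticate a prompt before it ever leaves the device."""
    nonce = secrets.token_bytes(16)
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce, ciphertext, tag

def decrypt_in_enclave(key: bytes, nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Only code holding the key (i.e., inside the enclave) can recover the prompt."""
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

# The relaying server only ever sees (nonce, ciphertext, tag), never `key`,
# so its operators handle nothing but opaque bytes.
key = secrets.token_bytes(32)  # hypothetical; in practice derived from the user's passkey
nonce, ct, tag = encrypt_on_device(key, b"summarize this contract")
assert ct != b"summarize this contract"
assert decrypt_in_enclave(key, nonce, ct, tag) == b"summarize this contract"
```

The point of the sketch is structural, not cryptographic: the key never travels with the message, so "the server cannot read it" is a property of the math, not of a policy document.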
The only place where the data is decrypted is inside a hardware-based 'Trusted Execution Environment (TEE),' or Secure Enclave. Only within this isolated space is the data temporarily decrypted to perform AI inference; once the output is generated, it is re-encrypted and sent back to the user. The data evaporates as soon as the task is completed. Confer builds on this by using open-source AI models and applying Remote Attestation technology, giving users a mechanism to verify for themselves whether the system is actually adhering to its promised security rules.
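Remote attestation, reduced to its essence, means the client compares a hash ("measurement") of the code actually loaded into the enclave against a pinned value published for the audited open-source build. The sketch below is a simplification with stdlib Python: real TEEs sign the measurement with a hardware-fused key, and that signature check is omitted here. All names and values are hypothetical.

```python
import hashlib

# Measurement the client expects: the hash of the audited, open-source
# enclave build (in practice published with reproducible-build instructions).
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-build-v1").hexdigest()

def attest(enclave_binary: bytes) -> str:
    """What the TEE hardware reports: a hash of the code it actually loaded.
    Real hardware signs this report; the signature check is omitted for brevity."""
    return hashlib.sha256(enclave_binary).hexdigest()

def client_verifies(reported_measurement: str) -> bool:
    """The user's device refuses to send encrypted prompts unless the
    running enclave matches the code that was promised."""
    return reported_measurement == EXPECTED_MEASUREMENT

# Honest server: running exactly the audited build.
assert client_verifies(attest(b"audited-enclave-build-v1"))
# Tampered server: even adding a logging hook changes the measurement.
assert not client_verifies(attest(b"audited-enclave-build-v1-with-logging"))
```

This is what turns "trust us" into "check us": any modification to the enclave code, however small, produces a different measurement and fails verification before any user data is sent.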
This is a paradigm shift from the past method of having to trust a company's promise that "we do not look at your data" to a technical enforcement of "we are structured such that we cannot look at your data."
Analysis: Is Expensive Security Worth It?
The industry is watching Confer's challenge with interest. While existing AI companies rely on policy-based anonymization, Confer has achieved security through the physical constraints of hardware. However, there are clear hurdles to overcome.
The first is the cost barrier. A monthly fee of $34.99 is a burdensome amount for the general public. While it is an unavoidable choice to maintain high-cost confidential computing infrastructure, it remains uncertain whether this price point can win broad adoption among individual users. However, in fields such as law, medicine, and finance, where even a single line of leaked data can cause fatal damage, this cost is likely to be perceived as a relatively inexpensive 'insurance premium.'
The second is performance uncertainty. Confer is based on open-source models for security reasons, but it has not yet clearly disclosed which models or versions are being used. The key question is how Confer's intelligence compares to top-tier closed models like ChatGPT (GPT-4o) or Claude 3.5 Sonnet. If the security is perfect but the answers are unsatisfactory, the market will inevitably pass it by.
Practical Application: Who Should Use It and How?
If you are a lawyer handling sensitive client information or a researcher dealing with drug development data, Confer becomes an excellent alternative. Previously, chatbot use may have been off-limits under security regulations, but Confer's enclave structure blocks data readability at the source, giving real room to maneuver on the compliance front.
The most concrete step users can take now is to classify which items in their own workflow count as 'data that must never leak.' Not every conversation justifies $34.99 a month, but using Confer's secure interface when drafting confidential projects or analyzing complex contracts is a highly attractive scenario. In particular, device authentication via Passkeys provides a far more robust security environment than traditional, vulnerable password systems.
FAQ
Q: How is this different from ChatGPT's 'Temporary Chat' or 'Incognito Mode'? A: Privacy modes in existing services are software settings and policy promises, meaning the operator can view the data if it chooses to (or if it is hacked). By contrast, because Confer processes data only within hardware enclaves, even the operator's own employees cannot physically inspect its contents.
Q: Are there corporate group discounts or separate licensing policies besides the $34.99 monthly fee? A: The revenue model disclosed so far is centered on a monthly subscription common to both individuals and corporations. Specific corporate volume licensing figures or group discount policies have not yet been confirmed, but they are likely to be added in the future as the service targets specialized markets.
Q: Which AI model does Confer use internally? A: Confer has stated that it is based on open-source AI models, but it has not disclosed specific model names (such as Llama 3, Mistral, etc.) or versions. Additionally, the specific method of providing technical tools for users to perform remote attestation themselves is yet to be confirmed.
Transitioning to an Era Where Security is the Default for AI
The launch of Confer is a signal that the Generative AI industry has moved past the stage of 'speed and performance' into the stage of 'trust and safety.' Just as Signal set the standard for security in the messenger market, Moxie Marlinspike aims to transplant that same standard into the AI market through Confer.
Despite the limitations of high subscription fees and the lack of specific model information, the 'privacy guaranteed by technology' presented by Confer will be the most powerful option for users who wish to reclaim their data sovereignty. Moving forward, what we must watch is whether such hardware-based security models can become mainstream enough to lower subscription costs and whether they can secure intelligence levels comparable to closed models.
References
- Confer: Signal's Founder Builds an AI Chatbot That Can't Spy on You
- Confer: Signal's Founder Builds an AI Chatbot That Can't Spy on You - Measured Collective
- Signal Founder Moxie Marlinspike Launches Encrypted AI Assistant Confer
- Signal's founder is taking on ChatGPT — here's why the 'truly private AI' can't leak your chats