Venice’s Privacy-Focused AI Chatbot Won’t Store Your Data, Judge Your Questions

Is your chatbot censoring you? Venice has a new privacy-focused AI platform that promises not to weigh in on a user's "morality." The startup, from crypto exchange Shapeshift founder Erik Voorhees, emerged from stealth on Friday and uses two open-source versions of Meta's Llama 3 model, plus Stable Diffusion, to provide web-only AI access.

In an interview with PCMag, Voorhees says he was inspired to develop his own AI platform after watching tech firms like Google, Microsoft, Anthropic, and OpenAI lock down their models and store user data, including chat records. (OpenAI says its training data includes personal information "incidentally.")

"We don't know what you're going to search for, or use it for—we don't really care. We're not trying to solve any individual user's morality," Voorhees tells PCMag. "We're just saying, let's make it easy to access open-source AI, because we think that's important for humanity to be able to do so that they don't get locked into some centralized option that is censoring them in ways big and small."

While anyone could theoretically pull the Nous H2P, Dolphin 2.9, or Stable Diffusion models Venice uses from AI data platforms like Hugging Face, Venice is catering to those who care about privacy, adhere to the cypherpunk ethos, or want quick access to AI models without additional filters.

"Everything in open source can be seen by anyone. You can validate it yourself. So if you don't want to trust someone, it's really the only way to establish what [the] truth is," Voorhees says. (One thing Venice does keep under wraps is the identity of its CTO; Voorhees says the Venice AI team includes a "potent anon CTO.")

The platform is free, with daily query maximums. There's no account requirement, but you can create an account or buy a Pro subscription for $4 a month or $49 annually. Pro plans can be purchased with Stripe, which Voorhees says doesn't share individual identities with Venice. The CEO also says a crypto payment option is coming "soon."

Venice AI's chat records are stored locally on a user's device and can be deleted at any time. The company only gains access to your IP address, which can be concealed with a VPN. Accounts can also be created with a junk email address or any anonymous email service. Chats are processed through third-party companies like Akash, which offer decentralized GPU computing services with cards and data centers located around the world.
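Venice hasn't published its client code, so the following is only a minimal sketch of what "stored locally on a user's device" can look like in a web app, assuming the browser's standard Web Storage API; the storage key and message shape are illustrative, not Venice's.

```typescript
// Hypothetical sketch of client-side chat storage: the history lives only in
// the browser, so no server-side transcript exists. Names and key are assumed.

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  timestamp: number;
}

const STORAGE_KEY = "chat.history"; // illustrative key, not Venice's

// Load whatever history exists on this device (empty list if none).
function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

// Append a message and persist it locally; nothing is sent to a server here.
function saveMessage(message: ChatMessage): void {
  const history = loadHistory();
  history.push(message);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}

// "Deleted at any time": removing the key wipes the only copy of the chat.
function clearHistory(): void {
  localStorage.removeItem(STORAGE_KEY);
}

saveMessage({ role: "user", content: "Hello", timestamp: Date.now() });
console.log(loadHistory()); // history exists only in this browser profile
clearHistory();
```

Under this kind of design, clearing the key (or the browser's site data) removes the only copy of the conversation, which is what makes the "we never see your chats" claim technically plausible.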

Creating a Venice account lets users disable its adult-content block, or "Safe Venice" mode, which covers one of the few types of content it does censor. Despite broader concerns about the proliferation of nude AI deepfakes online, and questions of how much responsibility platforms should bear for non-consensual nudity, porn, or sexist or objectifying outputs, it's possible to prompt NSFW responses with Venice. (OpenAI recently pushed back on reports that it was investigating how to allow AI-generated porn.)

Venice COO Teana Baker-Taylor emphasizes that the Safe Venice feature exists for a reason. But when it comes to the possibility of minors seeing explicit content, Baker-Taylor says "it's not Venice's job to parent the children of the world."

Venice's AI tools won't respond to queries about guns or weapons, however, even if users upgrade to the Pro version. And because Venice pulls from Meta's open-source models, it is also subject at launch to any pre-existing tendencies within Llama 3 itself.
