80 results found
- 6 votes: Anonymous supported this idea
- 41 votes: Anonymous supported this idea
- 2 votes: Anonymous shared this idea
- 643 votes: Anonymous supported this idea
- 3 votes: Anonymous shared this idea
- 117 votes: Anonymous supported this idea
- 2 votes: Anonymous shared this idea
- 11 votes: Anonymous supported this idea
- 327 votes: Anonymous supported this idea
- 2 votes: Anonymous shared this idea
- 391 votes
- 267 votes: Anonymous supported this idea
- 193 votes
Anonymous
commented
Y'all are basically asking Proton to increase the price of the Unlimited plan; running a GPU farm isn't cheap.
Anonymous
commented
Objection, your honor, the word-predicting machine (LLM) said that company is naughty!
AI simply agrees with anything if you push it hard enough, unless it is specifically trained not to. Everyone knows that.
- 9 votes: Anonymous shared this idea
- 4 votes: Anonymous shared this idea
- 943 votes: Anonymous supported this idea
- 3 votes: Anonymous shared this idea
- 2 votes: Anonymous shared this idea
- 165 votes
Anonymous
commented
I like the idea, but I don't think Proton will do it, at least not anytime soon; it's unrealistic.
Proton isn't an AI provider; they just offer one, the same way DuckDuckGo isn't an AI platform but still offers one.
Note: if you actually have a use for an AI API, self-host any model you like on a VPS with a GPU. It's cheaper if you don't exhaust the resources, and it's private. You'll also be able to host far better models than Proton's.
Anonymous supported this idea
- 30 votes: Anonymous supported this idea
Proton, if you're going to implement this, please give us the option to disable it, or to allow or decline sending data per request, with specific requirements.
I do not like the idea of my E2EE ecosystem sending my unencrypted data to an LLM just because it misunderstood me.