Commit to remaining free of any and all AI tools
Proton, so far, has been a bastion of "the Internet we actually wanted". This is why it was so disheartening to see that the company is now seriously considering integrating GenAI into its suite of tools.
As the tide of big tech moves toward forcing AI tools into the face of every user, the existence of a platform that is explicitly not doing this is a breath of fresh air. There is no reason to "innovate for the sake of innovation". Let your competitors waste their time and energy being late to the party with the 80th, 90th, and 100th AI tools to come to market.
Beyond the user experience, there is a plethora of moral reasons to avoid this AI plague:
- The mass exploitation of human labour used to tag inputs for these models
- The environmental effects of generating power used to train them
- The obscene amount of fresh drinking water diverted toward cooling
- The degradation of information quality, as the human knowledge encoded in language is substituted with statistical approximations
Please, reconsider this. It's been refreshing to use a platform that wasn't mindlessly chasing big tech trends, and to lose that now would give me an extremely disappointing reason to go searching once again for alternatives.
-
RH commented
People seem to think Proton are going to FORCE the use of AI onto customers. If it's optional (like the Writing Assistant already is!) then you take it or leave it. No need for a generalized pro/anti-AI discussion here.
-
nycki93 commented
I decided to give Proton a chance despite some initial misgivings because "at least it's not AI". Don't betray your early adopters like this.
-
Ananana commented
Unrealistic, short-sighted and muh feelings-pilled. Implementing this would serve no purpose for the end-user.
-
Ben Anderson commented
"The mass exploitation of human labour used to tag inputs for these models"
Getting paid to do a job you applied for is not exploitation. There are no slaves involved. Everyone is a willing participant who is being paid for their work and can quit at any time.
"The environmental effects of generating power used to train them"
Oh man, you should see what it takes to keep the Proton servers online. Training an AI model is one-and-done, but making your email work requires constant power generation.
"The obscene amount of fresh drinking water diverted toward cooling"
None? Liquid-cooled computers use closed cooling systems that recycle the same coolant (which isn't even water). Someone lied to you about how this works.
"The degradation of information quality, as the human knowledge encoded in language is substituted with statistical approximations"
If you don't find it useful, don't use it.
-
Shiva commented
The thing is, generative AI CAN be made ethically; it's just that all the other tech companies don't care about anything but money. Proton did this with Proton Scribe. The blog articles say that it:
1. runs on a model developed by Proton, so it's (a) not exploiting the work of others and (b) neither feeding us misinformation from the internet nor putting more out there.
2. does not save or use input for training and is opt-in only, so doesn't collect any data from us.
3. is only in Proton Mail and can run on your device. It needs 8GB of RAM to run locally, which says something about how much power it needs, but I don't think one person would use a writing AI 1,000 times a day the way idiots do with other types of genAI, so the environmental impact shouldn't be much more than that of any regular device user. GenAI for things such as images and videos is the big polluter here, because users generate whole new images/videos/whatever else exists now instead of making small edits, which shouldn't really be an issue with a writing AI since it's just text.
Scribe is a generative AI, but it checks all the boxes for an ethically-made one in my opinion. If Proton HAS to put more AI in its services, as long as they go about it the same way, I think it's fine. They already did with Sentinel, too, which is another thing: not all AI is generative. GenAI as we know it is a very recent concept.
I'm never using Scribe or any genAI, but at the very least, I trust Proton to do it ethically, however mainstream it is.
Proton responded to this post, by the way; I suggest everyone read it in the previous comments.
-
LG commented
Everyone acting coy saying "but AI is useful for SPAM!" like they don't know OP means the garbage generators currently being hyped by everyone, including Proton. 🙄
-
bha commented
I support the use of open models, and that is exactly what is needed to keep the largest corporations from monopolizing the space and the data. While the use of specific applications is debatable, the long-term existence of this and similar technology is undeniable, and open source is now more important than ever. As long as these "tools" can be toggled off in the Proton suite, I also don't see a problem.
I would say that things like the AI writing assistant are more an applied experiment than anything else for the time being.
If you talk about energy consumption, you also need to consider the amount of energy that has been and will be saved through AI-driven optimizations to power and resource management.
-
Citizen commented
I understand your concerns, but these are different ideological concerns than the ones that unite us here.
Proton already uses AI for things like spam filtering; it's something we all use (even you) in some form or another, and it isn't really that new.
I also think that you don't really understand how AI works. If you have a good computer, you could run a local model yourself right now. You also don't need human labor for many of these things, and many models today are open source, with contributions from the broader FOSS community.
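For what it's worth, "running a local model" really is just a few lines of Python these days. Here's a minimal sketch, assuming the Hugging Face transformers library and a small openly available model (my choice of library and model is purely illustrative, not anything Proton uses):

```python
# Minimal local text generation sketch using Hugging Face transformers.
# Assumes `pip install transformers torch`; the model weights are downloaded
# once, after which generation runs entirely on your own machine.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Dear team, a quick reminder that",
    max_new_tokens=40,   # keep the completion short
    do_sample=True,      # sample rather than greedy decoding
    temperature=0.8,     # mild randomness
)
print(result[0]["generated_text"])
```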
They communicated in one of their blog posts that, after their survey, a large majority of Proton's users wanted to have access to AI from Proton. So I think it will move in that direction anyway.
-
Anon commented
Foolish... For those who don't understand how AI, Deep Learning, Machine Learning, LLMs, and transformers work... it's just a bunch of small math concepts combined to be flexible enough to "fit" data.
Like a multidimensional mold.
That's all... it's not good or evil... it's the company that uses it.
The danger is it is such a good mold, it catches EVERYTHING... including any human bias that is in the data... and there is always human bias in the data.
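If the "mold" picture sounds abstract, here is a toy sketch of the same idea (entirely my own illustration, nothing to do with any Proton feature): fit a flexible curve to data that contains a systematic skew, and the fitted curve reproduces the skew along with the signal.

```python
# Toy "mold": fit a flexible function to data and watch it reproduce
# whatever bias the data contains. Purely illustrative; large models
# fit billions of parameters by the same basic principle.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
signal = np.sin(x)                     # the "real" pattern
bias = 0.5 * (x > 5)                   # a systematic skew baked into the data
y = signal + bias + rng.normal(0, 0.1, x.shape)

# "Training": find coefficients that make the curve fit the observations.
model = Polynomial.fit(x, y, deg=9)

# The fitted mold captures the bias along with the signal.
print("model(7)  ~", round(float(model(7.0)), 2))   # roughly sin(7) + 0.5
print("signal(7) =", round(float(np.sin(7.0)), 2))
```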
It is powerful, and the world needs companies like Proton to use it.
-
Dylan commented
The mental gymnastics of people who equate large energy usage with environmental impact are so outdated. Guys, it's how the energy is produced, not the fact that you've used a lot of it. You've got to change your mindset; it's backwards thinking. Energy abundance is the goal, not energy scarcity.
-
BillyBobMud commented
The environmental impact is all the reason I need to avoid AI. Back in June I read that an AI search engine uses 5 times the power of a traditional search. It's why I don't use Brave Search.
-
Matt commented
I think, sadly, this is just inevitable. If they don't do it, a rival will. The same worries came up during the Industrial Revolution.
I think they've been smart by using Mistral and also keeping it small and functional.
-
Draken commented
Artificial "intelligence" needs to go the way of the phonograph.
-
Jay commented
The only use of AI I'd support on Proton services is checking the metadata for potential spam. I oppose using AI to check any contents of email messages (body or subject). I'm sure this is already in place, though.
AI similar to Google's autocorrect or predictive text might be useful for writing emails. However, this should be opt-in, not forced, and even then I might consider it an invasion of privacy, since the goal of Proton Mail is to encrypt the content so that it isn't seen.
Everything is insecure nowadays, so the more you expand, the more security risks you take on, even if there are 20 audits done by security professionals. Mistakes happen; websites that were once considered secure get hacked all the time.
So I agree with the author: it is too much of a risk to have it. If you must implement it, limit it so it's opt-in rather than forced. And if AI is implemented, make it as minimal as you can so it isn't so bloated that it's harder to secure.
I oppose most AI implementation here.
-
Thomas commented
Also concerning is the environmental impact of AI tools. AI is great for specific applications; however, 9 out of 10 times I've seen it used as of late, it's to capitalize on a buzzword with an application that is not at all improved through the use of AI.
-
Jeff Schmidt commented
Seems like this ship has sailed, and so will my subscription.
-
Clayton Decker commented
AI is inevitable, and it carries tremendous risk; however, it can be implemented correctly. Take a look at Brave Leo for inspiration.
-
Jo commented
Here are some things I wanted to add:
- The environmental effects of generating power used to train them:
Depends. A lot of data centers run on renewable energy sources or on excess energy production. So there's a negligible effect on the climate if that's the case here.
- The obscene amount of fresh drinking water diverted toward cooling:
Unproblematic, since the water is sent back into the river one or two degrees warmer. The water doesn't get consumed, just briefly used.
- The degradation of information quality, as the human knowledge encoded in language is substituted with statistical approximations:
Generally problematic, but user-dependent. The problem you describe arises only if people accept the information provided by genAI as absolute truth or don't know it was generated by AI. If critical thinking is applied to the generated information, the problem isn't much different from trusting anything else on the internet.
Everyone uses autocorrect on mobile phones, but no one is seriously just tapping on the next suggested word and expecting a viable answer.
GenAI is the same, but it can hide it better.
-
Tully commented
I find it incredibly amusing that every pro-AI comment in this thread ignores all four of the points I made in this suggestion.
I would suggest that everybody making arguments that amount to "AI is here to stay" carefully consider what happened to the "crypto is here to stay" crowd once it became clear that the only winners in that space were the ones selling mining chips.
-
Ae commented
My issue is this: before putting effort into AI, basic functions should be prioritised, like offline mode for mail and calendar, sorting options for mail, and contact and event search on mobile, just to name a few.
AI doesn't make sense when such basic functions are lacking.