Commit to remaining free of any and all AI tools
Proton, so far, has been a bastion of "the Internet we actually wanted". This is why it was so disheartening to see that the company is now seriously considering integrating GenAI into its suite of tools.
As the tide of big tech moves toward forcing AI tools into the face of every user, the existence of a platform that is explicitly not doing this is a breath of fresh air. There is no reason to "innovate for the sake of innovation". Let your competitors waste their time and energy being late to the party with the 80th, 90th, and 100th AI tools to come to market.
Beyond the user experience, there is a plethora of moral reasons to avoid this AI plague:
- The mass exploitation of human labour used to tag inputs for these models
- The environmental effects of generating power used to train them
- The obscene amount of fresh drinking water diverted toward cooling
- The degradation of information quality, as the human knowledge encoded in language is substituted with statistical approximations
Please, reconsider this. It's been refreshing to use a platform that wasn't mindlessly chasing big tech trends, and to lose that now would give me an extremely disappointing reason to go searching once again for alternatives.
-
bha commented
I support the use of open models, and that is exactly what is needed to keep the largest corporations from monopolizing the space and the data. While the use of specific applications is debatable, the long-term existence of this and similar technology is undeniable, and open source is now more important than ever. As long as these "tools" can be toggled off in the Proton suite, I also don't see a problem.
I would say that things like the AI writing assistant are more an applied experiment than anything else for the time being.
If you talk about energy consumption, you also need to consider the amount of energy that was and will be saved through optimizations made by AI models to power and resource management.
-
ϻя.ƹ commented
Hello,
I hope this message finds you well, healthy, and most importantly ... Happy!
I was born with the burden of a 172, 2-deg-sep mind, and in my 50+ years I've learned some very important truths: I'm smart enough to know I'll never be smart enough, and wise enough to know I'll never know everything.
After reading some passionate opinions here on both sides of this, no matter the expertise touted, it would take thousands of hours of research over decades in the fields on BOTH sides of the argument to offer much more than conjecture, let alone anything as definitive as what so many have expressed. Some of the folks here are clearly well versed in one field, AI or environmental activism, but not in both. While there were many facts presented about the persons making the statements and their personal knowledge of one or the other, there isn't much actual data to support the arguments. So as an impartial fella with no real "dog in this fight" just observing, I'd have to say that, from the Proton perspective, there isn't much to compel me either way.
I can say I really kinda dig some of the AI I've goofed around with. I made a "script" that parsed, organized, and accurately renamed over 5,000 PDF and image documents I had for a project that only had numbers for file names, and the last coding I did was when 56k dial-up was an upgrade, so that was pretty rad, and now my kids think I'm Elliot Alderson. Then again, I dig air and water much, MUCH more, and starving people around the world are burning more trees than is sustainable. Those are facts, and still not very compelling to leadership, I'm sure. Maybe just enough to "fact check" me on the latter ... it'll be a first-page G-word search to confirm, though ... personally I dig Leo best, but ¯\_(ツ)_/¯
My point here is that all I said here, talking about myself, clearly has no impact on the debate presented, even if 99.9% of people here, according to stats, not me, aren't as capable of learning as I am.
I presented details about what I knew, said others don't know as much, and offered some irrelevant data, which is basically what this entire conversation was. I just knew I was doing it, but sometimes it's SO much easier to see from the outside. I don't fault ANYONE for being passionate about something, EVER; if anything, I was proud of you all for giving a **** either way!!! Truly!
However, when we can use that passion to start addressing issues like this by coming together and collectively figuring out how "to use AI here in a way that helps save the environment AND does important email stuff AND sustains our privacy expectations," then together we'll get places like Proton to not only listen, but do what the heck we say ... how could they not? That is where your power is ... together. And in a time in history when so many feel so helpless, the way we've all been conditioned to feel, it's a dang good thing to know ... that's JUST NOT true. There's a bunch of smart folks here; I'd bet y'all could do it ... I sure wouldn't bet against this group. Find just a bit of the right in each other instead of all the wrong, and this group would be truly unstoppable!
Thank you all very much in advance for your valuable time, I sincerely appreciate YOU! Much love!!! ✌
.ƹ."Without ART, the eARTh would just be 'eh'!"
ოΓ. ǝ -
Citizen commented
I understand your concerns, but these are different ideological concerns than the ones that unite us here.
Proton already uses AI for things like spam filtering; it's something we all use (even you) in some form or another, and it isn't really that new.
I also think you don't really understand how AI works. If you have a good computer, you could run a local model yourself right now. You also don't need human labor for many of these things, and many models today are open source, with contributions from the broader FOSS community.
They communicated in one of their blog posts that, according to their survey, a large majority of Proton's users wanted access to AI from Proton. So I think it will move in that direction anyway.
-
Anon commented
Foolish... For those who don't understand how AI, deep learning, machine learning, LLMs, and transformers work: it's just a bunch of small math concepts combined to be flexible enough to "fit" data.
Like a multidimensional mold.
That's all... it's not good or evil... it's the company that uses it.
The danger is that it is such a good mold, it catches EVERYTHING... including any human bias that is in the data... and there is always human bias in the data.
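The "mold catches the bias" point above can be sketched in a few lines. This is a hypothetical toy example (invented data; ordinary least squares standing in for a real model): the fit recovers the genuinely useful signal, but it reproduces the bias baked into the historical labels just as faithfully.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy training data: two features per applicant.
# skill -- genuinely relevant to the outcome
# group -- should be irrelevant, but the historical labels below
#          were produced by biased human reviewers
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 or 1

# Biased historical labels: reviewers systematically docked group 1.
label = skill - 0.5 * group + rng.normal(scale=0.1, size=n)

# "The mold": an ordinary least-squares fit, the simplest possible model.
X = np.column_stack([skill, group, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)

# The model faithfully fits the data -- including the bias in it.
print(f"weight on skill: {coef[0]:+.2f}")  # close to +1.0: the real signal
print(f"weight on group: {coef[1]:+.2f}")  # close to -0.5: the bias survives
```

Nothing in the fitting procedure is "evil"; it simply minimizes error against whatever labels it is given, which is exactly why biased data yields a biased model.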
It is powerful and the world needs companies like proton to use it.
-
Dylan commented
The mental gymnastics of people who equate large energy usage with environmental impact are so outdated. Guys, what matters is how the energy is produced, not the fact that you've used a lot of it. You've got to change your mindset; it's backwards thinking. Energy abundance is the goal, not energy scarcity.
-
BillyBobMud commented
The environmental impact is all the reason I need to avoid AI. Back in June I read that an AI search engine uses five times the power of a traditional search. It's why I don't use Brave Search.
-
Matt commented
I think, sadly, this is just inevitable. If they don't do it, a rival will. The same worries arose during the industrial revolution.
I think they've been smart by using Mistral and also keeping it small and functional.
-
Draken commented
Artificial "intelligence" needs to go the way of the phonograph.
-
Jay commented
The only use of AI I'd support on Proton services would be checking metadata for potential spam. I oppose using AI to check any contents of email messages (body or subject). I'm sure this is already in place, though.
AI similar to Google's autocorrect or predictive text might be useful for writing emails. However, this should be opt-in, not forced, and even then I might consider it an invasion of privacy, since the goal of ProtonMail is to encrypt the content so that it isn't seen.
Everything is insecure nowadays, so the more you expand, the more security risks you take on, even with 20 audits done by security professionals. Mistakes happen; websites that were once considered secure get hacked all the time.
So I agree with the author. It is too much of a risk. If you must implement it, make it opt-in rather than forced, and keep it as minimal as you can so it isn't so bloated that it becomes harder to secure.
I oppose most AI implementation here.
-
Thomas commented
Also concerning is the environmental impact of AI tools. AI is great for specific applications; however, nine times out of ten that I've seen it used of late, it's been to capitalize on a buzzword with an application that is not at all improved through the use of AI.
-
Jeff Schmidt commented
Seems like this ship has sailed, and so will my subscription.
-
Clayton Decker commented
AI is inevitable, and it carries tremendous risk; however it can be implemented correctly. Take a look at Brave Leo for inspiration.
-
Jo commented
Here are some things I wanted to add:
- The environmental effects of generating power used to train them: Depends. A lot of data centers run on renewable energy sources or on excess energy production. So there's a negligible effect on the climate if that's the case here.
- The obscene amount of fresh drinking water diverted toward cooling: Unproblematic, since the water is sent back into the river one or two degrees warmer. The water doesn't get consumed, just briefly used.
- The degradation of information quality, as the human knowledge encoded in language is substituted with statistical approximations: Generally problematic, but user dependent. The problem you describe arises only if people accept the information provided by GenAI as absolute truth or don't know it was generated by AI. If critical thinking is applied to the generated information, the problem isn't much different from trusting anything else on the internet.
Everyone uses autocorrect on mobile phones, but no one seriously just taps the next suggested word and expects a viable answer.
GenAI is the same, but it can hide it better.
-
Tully commented
I find it incredibly amusing that every pro-AI comment in this thread ignores all four of the points I made in this suggestion.
I would suggest that everybody making arguments that amount to "AI is here to stay" carefully consider what happened to the "crypto is here to stay" crowd once it became clear that the only winners in that space were the ones selling mining chips.
-
Ae commented
My issue is this: before putting effort into AI, basic functions should be prioritised, like offline mode for Mail and Calendar, sorting options for Mail, and contact and event search on mobile, to name just a few.
AI doesn't make sense when such basic functions are lacking.
-
Klara commented
Having read through the comments, I can see this is a very divisive proposal. I think I would trust Proton to implement AI in a secure and ethical way, though whether I would actually use the tools myself is unlikely. I freely admit that I don't know enough about AI and the ethical and environmental considerations of it to judge which of the concerns raised in this suggestion would apply to whatever form of AI tool Proton would introduce. I would appreciate Proton releasing a statement which addresses all concerns clearly and transparently, what kind of AI tools they're considering and how users could opt not to use them if they don’t want to.
-
M commented
Looks like people don't understand what AI means (AI is not something new at all) and are just scared of the word. It also feels like a lot of you don't understand what end-to-end encrypted means…
Our data is encrypted, so as long as whatever AI features they add don't interfere with our privacy, there is *no* reason to be against them.
AI, and more specifically GenAI, is the next big thing whether people like it or not; if Proton doesn't embrace it in some way, the company will be out of business within the next 5 to 10 years. You wouldn't pay to use a service that hasn't changed since 1999, would you? It's the same here: in 5 to 10 years, AI will have completely changed how we write and respond to emails, how we find our files in our drive, how we look for pictures, and how we interact with tech in general. Proton must adapt, and there are ways to do this without being unethical or violating privacy.
It's not because OpenAI and Google went the evil way that Proton can't do it the right way. Want proof? Y'all moved from Gmail to Proton for that specific reason. Just continue to trust Proton to do the right thing.
Saying that there is no reason to innovate for the sake of innovating means you just don't understand how the world works. Innovation is why we have electricity, the internet, computers, vaccines, cars, and movies, and why we don't die at the ripe old age of 30.
Y'all need to educate yourselves and stop being afraid of change. Unfounded fears and conservatism always end up with ****** outcomes.
-
Clark Everson commented
I actually strongly disagree with this and wish there were a way to vote exactly the opposite. The wording of this suggestion shows a lack of understanding of how AI models work. There are many ways they could build their own self-contained AI. It doesn't even need to be trained on user data, similar to how the protected model works in Tabnine.
-
Ben Anderson commented
This suggestion has to be a joke XD No way can you seriously expect an Internet company to forsake the biggest technological leap of our lifetimes. It's like asking a store to commit to not selling products online.
-
Mehmet commented
I don't support this proposal. Proton cannot develop an AI that runs in the cloud: our data is encrypted and can't be decrypted on their servers. So the AI must run on the user's device (it would have to be a tiny model, since it will run on mobile devices and desktops). I'm OK with that.
However, I mostly agree with what Tully says about cloud AIs.