Customized AI Ethics, tailored to the User
The current AI ethics is standard CYA mode to protect Proton from being sued. That is great for Proton, sort of, but a waste of time for the rest of us, and it lowers the value of Proton's product. Having a private and encrypted AI session is meaningless if it isn't truly private, with Proton's lawyers looking over my shoulder.
I propose customized settings, complete with disclaimers and liability waivers if necessary. This would allow both sides of the morality spectrum to be better served by this tool.
As it stands now, your tool is useless to a good percentage of medical professionals, and a larger percentage of entertainment professionals, when used in a professional manner. That was just in 5 minutes of testing.
I appreciate the need to protect yourself, Proton, but what is the point of creating a tool if all it does is give you a bad reputation?
-
John commented:
The only thing off the table is whatever is illegal where the model is hosted and where the client is. Everything else should be covered under varying liability waivers and disclaimers.
The opposite is true as well. Even if I don't agree with it, a person should be able to lock down what enters their home and what is available to their children.