Kaye
My feedback
4 results found
139 votes
Kaye
supported this idea
306 votes
Kaye
supported this idea
1 vote
Kaye
shared this idea
22 votes
Kaye
commented
As someone who came to Proton Pass with well over 300 items, thanks to a chronic bad habit of importing and never cleaning anything up as I bounced between password managers, I support this option as a quality-of-life improvement.
My idea would be the ability to select multiple entries and click a link button to bind them into one complex entry, with an unlink button for adding or removing specific entries. Linking the entries should hide all but the one linked entry from search, from the "All Items" list, and from whatever vault it's in, reducing the number of entries and simplifying housekeeping.
Kaye
supported this idea
While I appreciate the protections Lumo offers by not tracking interactions, and fully understand that this is why Lumo refuses to answer unethical questions, I would rather sign a consent form allowing Proton to disclose every unethical question I have ever asked Lumo to my country's police force, without notifying me first, than trust that data to another company's AI. My personal idea for an AI that answers unethical questions is at the bottom...
This is a tough topic to support, but real life isn't all rainbows and butterflies, and for writers to create works that include life's realities without being harmful, it's important to have a source of information where these gray areas can be explored safely. Take the American TV series Breaking Bad: they didn't give away the formula for making that drug, but the show included enough information to be believable. That's the goal of ethical writers who venture into darker themes: to make the work feel realistic enough to keep the immersion going, but not so real that it causes harm.
This is a really challenging skill to develop, given how subjective perspectives are, and I would really enjoy an AI I can safely explore the gray areas with, one that helps me avoid detrimental mistakes in my writing when dealing with these sensitive topics.
As someone else has mentioned, and I fully agree, our society has pushed the normalization of unethical topics too far, and other AIs are all too happy to keep that going. So I personally feel that an AI that answers unethical questions in a helpful yet cautionary way, explaining the moral dilemmas involved while offering suggestions to make content less harmful without robbing it of all realism, would be invaluable in the creative world.