Expose the underlying model name
I understand that offerings such as Copilot also hide the underlying models, but I would really appreciate it if the model used for any given answer were named. Currently, we only have a list in the supporting documentation of the possible models that may be used, and the answer itself never reveals which model generated it. If the user asks directly, the model will refuse. I assume this is because the model itself doesn't know, or because the system prompt forbids it from revealing this. I can understand the business incentive, but it is difficult to grant Proton such a level of control as to not even know which model generated the answer.
-
Cedric
commented
All the models listed on the Proton Lumo documentation are between 12 billion and 32 billion parameters.
None of these models is as good as state-of-the-art models such as OpenAI's GPT-3, GPT-4, and GPT-5. See https://protonmail.uservoice.com/forums/932842-lumo/suggestions/50338284-employ-bigger-models
Consequently, it does not seem that useful to know which model Lumo uses for each user request, since the details they give are enough to know that it is not going to be a great model.
I mean, sure, it would be nice to know whether it is a 12 billion or a 32 billion parameter model. But either is still far from the 200-10,000 billion parameter models used by OpenAI's GPT and similar competing models.
At the moment, if you care about state-of-the-art results, you unfortunately can't use Lumo.
-
B
commented
https://proton.me/support/lumo-privacy#open-source:~:text=Open%2Dsource%20language%20models,-Lumo
> The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3.
-
Edward Tjörnhammar
commented
Just adding to this: transparency is key when building trust in services that espouse privacy. As such, it would be prudent to provide a model card for each of the underlying base models, i.e. what the system uses as a base and whether it has been fine-tuned. Even better would be to go one step further and elaborate on data sources, alignment, and potential biases. At the very least, the Lumo team should provide a statement explaining why they have chosen not to publish such information at this point in time.