
Cedric

My feedback

18 results found

  1. 12 votes

    1 comment  ·  Lumo » New feature
    Cedric commented  · 

    I agree it would be nice. Though I don't know how Lumo manages privacy when you enable the Web search toggle and it starts using live Internet content.

    Cedric supported this idea  · 
  2. 2 votes

    0 comments  ·  Lumo » New feature
    Cedric supported this idea  · 
  3. 3 votes

    Cedric supported this idea  · 
  4. 67 votes

    Cedric commented  · 

    To give a bit of context:

    GPT-3: 175 billion total parameters (publicly confirmed in https://github.com/openai/gpt-3)

    GPT-4: estimated at 1,000-2,000 billion total parameters (not publicly confirmed, see https://explodingtopics.com/blog/gpt-parameters), but the total count is considered less relevant because GPT-4 reportedly uses a Mixture-of-Experts (MoE) architecture. Only a subset of parameters is active per token, i.e. a small, well-chosen set of parameters that act as "experts" for the given token. GPT-4 is estimated at ~200 billion active parameters per token.

    The advantage of the MoE approach is that the model is trained with a huge total number of parameters so it performs better (qualitatively), while only a smaller subset of parameters is active per token so it runs faster (quantitatively).
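    The top-k routing idea described above can be sketched in a few lines. This is a toy illustration with made-up experts and router scores, not GPT-4's actual internals:

    ```python
    # Toy sketch of Mixture-of-Experts (MoE) top-k routing: a router scores
    # every expert for the current token, but only the k best-scoring
    # experts actually run. All numbers here are made up for illustration.
    import math

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    def moe_forward(token, experts, router_scores, k=2):
        """Run only the k best-scoring experts and mix their outputs."""
        # Rank experts by router score; keep the top k ("active" experts).
        ranked = sorted(range(len(experts)), key=lambda i: router_scores[i], reverse=True)
        active = ranked[:k]
        # Re-normalise the active experts' scores into mixing weights.
        weights = softmax([router_scores[i] for i in active])
        # Weighted sum of the active experts' outputs; the rest never run,
        # which is why inference cost scales with active, not total, parameters.
        return sum(w * experts[i](token) for w, i in zip(weights, active))

    # Toy "experts": each is just a function of the token value here.
    experts = [lambda t: t + 1, lambda t: t * 2, lambda t: t - 3, lambda t: t * t]
    out = moe_forward(3.0, experts, router_scores=[0.1, 2.0, 0.5, 1.5], k=2)
    # Only the two experts with scores 2.0 and 1.5 are evaluated for this token.
    ```

    In a real MoE transformer the router is itself a learned layer and the experts are feed-forward sub-networks, but the cost argument is the same: total parameters set quality, active parameters set speed.
    
    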

    GPT-5: estimated at 1,000-10,000 billion total parameters (not publicly confirmed, see https://www.cometapi.com/how-many-parameters-does-gpt-5-have/), also MoE, with an estimated 200-600 billion active parameters per token.

    Even if the above are approximations, the models used by Proton are far smaller and are not MoE. MoE models are standard nowadays, so Lumo's current models will surely look outdated next to better models soon...

    According to Proton (https://proton.me/support/lumo-privacy#open-source:~:text=Open%2Dsource%20language%20models,-Lumo), the models Lumo currently uses are: Nemo, OpenHands 32B, OLMo 2 32B, and Mistral Small 3.

    OpenHands 32B: 32 billion parameters (https://huggingface.co/OpenHands/openhands-lm-32b-v0.1)

    OLMo 2 32B: 32 billion parameters (https://learnprompting.org/blog/ai2-released-olmo2-32b)

    Mistral Small 3: smaller model, 24 billion parameters (https://mistral.ai/news/mistral-small-3)

    Nemo (Mistral): even smaller model, 12 billion parameters (https://mistral.ai/news/mistral-nemo)

    Cedric supported this idea  · 
  5. 3 votes

    1 comment  ·  Lumo » New feature
    Cedric commented  · 

    Indeed, it would give additional privacy protection (against local attacks).

    Cedric supported this idea  · 
  6. 1 vote

    Cedric commented  · 

    You are wrong: Lumo does not share what you say to it. So privacy is fine with Lumo, and with locally hosted AI, but not with other online providers.

  7. 3 votes

    0 comments  ·  Lumo » New feature
    Cedric supported this idea  · 
  8. 3 votes

    Cedric commented  · 

    I don't agree.

    We don't want Lumo to use models like GPT-5, GPT-5 Thinking, Gemini 2.5 Pro, Grok 4, or Claude Sonnet 4.5, as we would lose our privacy.

    Instead, we want Lumo to use better open models. See https://protonmail.uservoice.com/forums/932842-lumo/suggestions/50338284-employ-bigger-models

  9. 5 votes

    Cedric supported this idea  · 
  10. 5 votes

    0 comments  ·  Lumo » New feature
    Cedric supported this idea  · 
  11. 7 votes

    1 comment  ·  Lumo » New feature
    Cedric supported this idea  · 
  12. 8 votes

    Cedric supported this idea  · 
  13. 8 votes

    Cedric supported this idea  · 
  14. 22 votes

    2 comments  ·  Lumo » New feature
    Cedric supported this idea  · 
  15. 28 votes

    2 comments  ·  Lumo » New feature
    Cedric supported this idea  · 
  16. 109 votes

    4 comments  ·  Lumo » New feature
    Cedric commented  · 

    All the models listed in the Proton Lumo documentation have between 12 and 32 billion parameters.

    None of these models is as good as state-of-the-art models like OpenAI's GPT-3, GPT-4, and GPT-5. See https://protonmail.uservoice.com/forums/932842-lumo/suggestions/50338284-employ-bigger-models

    Consequently, it does not seem that useful to know which model Lumo uses for each request, since the details Proton already gives are enough to know it will not be a great model.

    I mean, sure, it would be nice to know whether it is a 12 billion or a 32 billion parameter model. But either is still far from the 200-10,000 billion parameter models used by OpenAI's GPT and similar competitor models.

    At the moment, if you care about state of the art results, you unfortunately can't use Lumo.

  17. 193 votes

    Cedric supported this idea  · 
  18. 164 votes

    22 comments  ·  Lumo » New feature
    Cedric commented  · 

    The simplest approach is probably to implement an OpenAI-compatible API. See https://bentoml.com/llm/llm-inference-basics/openai-compatible-api
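    For reference, "OpenAI-compatible" essentially means accepting the same Chat Completions request schema that OpenAI's SDKs and most LLM tools emit, served under a `/v1/chat/completions` route. A minimal sketch of such a request body (the model name is a hypothetical placeholder, and nothing is actually sent over the network):

    ```python
    # Sketch of the OpenAI Chat Completions request shape that an
    # "OpenAI-compatible" server would accept. "lumo-placeholder" is a
    # made-up model name, not a real Lumo identifier.
    import json

    def chat_completion_request(model, user_message):
        """Build an OpenAI-style chat completion request body."""
        return {
            "model": model,
            "messages": [
                {"role": "user", "content": user_message},
            ],
        }

    body = chat_completion_request("lumo-placeholder", "Hello")
    # A client would POST this JSON to <base-url>/v1/chat/completions.
    payload = json.dumps(body)
    ```

    Because the schema is the compatibility contract, any existing OpenAI client could then talk to Lumo just by changing the base URL.
    
    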

    Cedric supported this idea  · 
