Python interpreter
Hi there, I've been a Proton user for around 4–5 years. I consider myself an early adopter of new technologies, so I've been testing out ChatGPT, Gemini, and other LLM chat variants. I mainly use them as a researcher and engineer, so I want LLMs that are robust and predictable, in whatever sense an LLM can be. In other words, I want a "translator" or a "meta-transpiler" that turns commands into actions to be performed. For example, the task "translate this from Spanish to Catalan" is a direct task that involves translation. But describing the need for a Python snippet that draws a figure using matplotlib is also a translation.
Until a month ago, before the holiday season in Europe, I was using ChatGPT 4o as my default model and interface for such tasks. As you probably know, the ChatGPT interface can run an IPython kernel in a sandboxed environment. For me, this is more than enough to really speed up writing papers, automate the boring parts of my day-to-day work, etc.
Could this idea be brought to Lumo? I understand this is not an easy request, but it would be a powerful feature for Lumo. If running a sandboxed IPython environment on Lumo's servers would be a security issue, I propose a very straightforward trick that IPython kernels can already perform. In the same way Google Colaboratory can connect to a local IPython kernel over a TCP/IP socket (the browser sends code via JS to a local port, where the kernel executes it), Lumo could potentially do the same by developing the client-side software needed to extract code from Lumo's responses and run it on the local machine.
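To make the idea concrete, here is a minimal, hypothetical sketch of the local-execution plumbing in pure Python. This is deliberately not the real Jupyter kernel protocol (which runs over ZeroMQ with HMAC-signed messages and is what Colab's local runtimes actually use); it is only a toy illustration of the shape of the thing: a small server listening on a localhost port that receives a code snippet and returns its printed output, plus the client-side call a Lumo client could make. All names here (`ExecHandler`, `run_snippet`, etc.) are made up for this sketch, and a real implementation would need authentication and sandboxing before `exec`-ing anything.

```python
# Toy sketch of a local execution endpoint: a tiny TCP server on localhost
# that receives a Python snippet and returns whatever the snippet prints.
# NOT the real Jupyter kernel protocol (that uses ZeroMQ and signed
# messages); this only illustrates the client-side plumbing.
import contextlib
import io
import socket
import socketserver
import threading

class ExecHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read the whole snippet sent by the client (until EOF).
        code = self.rfile.read().decode("utf-8")
        buf = io.StringIO()
        try:
            # Capture stdout; exec-ing untrusted code like this is unsafe
            # in real life and would need a proper sandbox.
            with contextlib.redirect_stdout(buf):
                exec(code, {"__name__": "__lumo_snippet__"})
        except Exception as exc:
            buf.write(f"error: {exc}")
        self.wfile.write(buf.getvalue().encode("utf-8"))

def start_server():
    # Port 0 lets the OS pick a free port, similar to how Colab's
    # local-runtime connection negotiates its endpoint.
    server = socketserver.TCPServer(("127.0.0.1", 0), ExecHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def run_snippet(port, code):
    # What the client would do: push the code extracted from a Lumo
    # response to the local port and read back the output.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(code.encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)  # signal EOF so the server stops reading
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode("utf-8")

if __name__ == "__main__":
    server = start_server()
    print(run_snippet(server.server_address[1], "print(2 + 2)"))  # prints "4"
    server.shutdown()
```

In practice the interesting work is on the other side of this sketch: parsing the fenced code blocks out of the model's reply and deciding when the user has consented to run them, with the transport itself being the easy part.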
Cheers,
Ismael