The ollama-rs crate has a streaming feature that returns the answer piece by piece, so I don't have to wait a few minutes for the complete text. I've made a simple UI in Iced that lets me ask a selected AI model a question and get the answer, but I can't figure out how to leverage the streaming feature so I don't have to wait for the whole answer.
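For context, here's roughly what my current non-streaming setup does (assuming iced 0.13; the model name and `Message` variants are just placeholders from my app):

```rust
use iced::Task;
use ollama_rs::{generation::completion::request::GenerationRequest, Ollama};

#[derive(Debug, Clone)]
enum Message {
    Ask(String),
    Answered(String),
}

fn ask(prompt: String) -> Task<Message> {
    let ollama = Ollama::default(); // http://localhost:11434

    Task::perform(
        async move {
            ollama
                .generate(GenerationRequest::new("llama3".into(), prompt))
                .await
                .map(|r| r.response)
                .unwrap_or_default()
        },
        // The full answer only arrives here once generation finishes,
        // which can take minutes for a long response.
        Message::Answered,
    )
}
```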
I can see there are things like iced::Task::run and iced::Task::stream, but I can't find any explanation or example of how to use them, or whether they're even the right tools for this.
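Here's the kind of thing I *imagine* Task::run is for, but I honestly don't know if this is right. This is just a sketch, assuming iced 0.13 with its `iced::stream::channel` helper and a recent ollama-rs with the "stream" feature enabled; the item type that `generate_stream` yields also seems to vary between ollama-rs versions:

```rust
use std::sync::Arc;

use iced::futures::{SinkExt, StreamExt};
use iced::widget::text_editor;
use iced::Task;
use ollama_rs::{generation::completion::request::GenerationRequest, Ollama};

#[derive(Debug, Clone)]
enum Message {
    Ask(String),
    Chunk(String),
}

#[derive(Default)]
struct State {
    content: text_editor::Content,
}

fn update(state: &mut State, message: Message) -> Task<Message> {
    match message {
        Message::Ask(prompt) => Task::run(
            // iced::stream::channel turns an async block into a Stream
            // by letting it push items through an mpsc sender.
            iced::stream::channel(32, |mut output| async move {
                let ollama = Ollama::default(); // http://localhost:11434
                let mut stream = ollama
                    .generate_stream(GenerationRequest::new("llama3".into(), prompt))
                    .await
                    .expect("streaming request should start");

                // Recent ollama-rs versions yield Result<Vec<GenerationResponse>, _>;
                // older ones yield single responses -- adjust as needed.
                while let Some(Ok(responses)) = stream.next().await {
                    for resp in responses {
                        let _ = output.send(resp.response).await;
                    }
                }
            }),
            Message::Chunk,
        ),
        Message::Chunk(piece) => {
            // Append each piece to the text_editor by "pasting" it at the
            // cursor (which I keep at the end of the buffer).
            state
                .content
                .perform(text_editor::Action::Edit(text_editor::Edit::Paste(
                    Arc::new(piece),
                )));
            Task::none()
        }
    }
}
```

Is this even the intended pattern, or is Task::stream (or a Subscription) the better fit here?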
Does anyone have an idea how to stream the answers from Ollama into a text_editor?