Replies: 1 comment
-
The auto-scroll behavior definitely needs another refactor. I wrote that code early on, and it was tricky: I had to figure out which events to listen for in the Turbo Stream update cycle. The really tricky part was distinguishing between the browser auto-scrolling (so you can keep reading the streamed response) and the user deliberately scrolling away from it.
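The gist of the approach, as a minimal sketch (assuming a Stimulus controller attached to the scrollable conversation element; the class layout, the event choice, and the 50px threshold are illustrative, not the actual code in this repo):

```typescript
import { Controller } from "@hotwired/stimulus"

// Hypothetical sketch: auto-scroll that yields to the user.
// Programmatic scrolls set a flag so the scroll listener can
// tell them apart from the user scrolling away.
export default class extends Controller {
  userHasScrolledAway = false
  programmaticScroll = false

  connect() {
    this.element.addEventListener("scroll", this.onScroll)
    // Fires once per Turbo Stream action, i.e. per streamed chunk.
    document.addEventListener("turbo:before-stream-render", this.onStream)
  }

  disconnect() {
    this.element.removeEventListener("scroll", this.onScroll)
    document.removeEventListener("turbo:before-stream-render", this.onStream)
  }

  onScroll = () => {
    if (this.programmaticScroll) {
      // Caused by our own scrollTop assignment below; ignore it.
      this.programmaticScroll = false
      return
    }
    // A genuine user scroll: stop following unless they are back
    // within 50px of the bottom of the conversation.
    const { scrollTop, scrollHeight, clientHeight } = this.element
    this.userHasScrolledAway = scrollHeight - scrollTop - clientHeight > 50
  }

  onStream = () => {
    if (this.userHasScrolledAway) return
    // Wait a frame so the stream action has been applied to the DOM.
    requestAnimationFrame(() => {
      const bottom = this.element.scrollHeight - this.element.clientHeight
      if (this.element.scrollTop !== bottom) {
        this.programmaticScroll = true
        this.element.scrollTop = bottom
      }
    })
  }
}
```

The threshold matters because "at the bottom" is rarely an exact pixel match once the layout reflows during streaming, and the flag is needed because both kinds of scroll fire the same `scroll` event.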
-
When starting a new conversation, the rendering is not smooth: the streaming LLM response is repeatedly shown and then hidden, over and over. From the second LLM response in a conversation onwards, the streaming response renders normally. It is only ever the first response of a conversation that shows this behaviour.
While the bot is responding and the LLM response streams into the conversation, I cannot scroll up to an earlier part of the conversation: the streaming forces the window back to the newest text. This is quite annoying, because I often re-read earlier parts of the conversation while a new response is coming in, especially when using the LLM for coding.
This scrolling behaviour even appears when skimming through previous, "finished" conversations in the history sidebar. It seems as if some asynchronous processing keeps forcing the window to the most recent position in the conversation. After a while, browsing older conversations returns to normal and the behaviour disappears.
I am using Firefox as my default browser and deploy with docker-compose on my own server.