Using local LLMs to extract and summarize audio transcripts from YouTube and podcasts. Examples: a DESeq2 tutorial from YouTube, and the Nextflow Podcast.
Did you run into problems using Ollama because of the prompt length?
Not with these small examples, but context windows are getting longer for many open models these days.
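When a transcript does exceed a model's context window, a common workaround is to split it into overlapping chunks, summarize each chunk, and then summarize the combined partial summaries. A minimal sketch of the chunking step (the word limits here are illustrative, not Ollama's actual context sizes):

```python
def chunk_transcript(text, max_words=2000, overlap=100):
    """Split a long transcript into overlapping word chunks.

    Each chunk can then be sent to the model separately, and the
    per-chunk summaries combined in a final summarization pass
    ("map-reduce" summarization). The overlap preserves context
    across chunk boundaries.
    """
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
    return chunks


# Example: a 5000-word transcript yields three chunks of
# 2000, 2000, and 1200 words with this chunk size and overlap.
transcript = " ".join(f"word{i}" for i in range(5000))
chunks = chunk_transcript(transcript)
print([len(c.split()) for c in chunks])
```

Each chunk could then be passed to a local model (e.g. via Ollama's chat API) with a prompt like "Summarize this portion of a transcript", and the partial summaries concatenated for a final pass.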