Transform speech transcripts into valuable insights with our advanced LLM-powered analysis tool featuring multi-modal capabilities
Clean, intuitive UI with a convenient sidebar, dark/light themes, and a fluid user experience designed for productivity.
Seamlessly connect with top LLM providers including Cerebras, Groq, Google Gemini, SambaNova, and more for optimal analysis.
Powerful image processing with OCR, screenshot analysis, and vision model support for text-image multimodal analysis.
Watch AI responses generate in real-time for immediate feedback and dynamic interaction with language models.
Full support for both English and Chinese interfaces, easily toggled with a single click to accommodate global users.
Fine-tune with custom prefix/suffix text, temperature controls, and model-specific settings to perfect your AI outputs (see the sketch after this feature list).
Meticulously crafted light and dark themes for optimal viewing experience in any lighting condition.
Smart prompting features with customizable templates to get the most relevant and insightful responses from AI models.
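To picture how the provider integration, real-time streaming, and temperature/prefix controls described above fit together, here is a minimal sketch. It is not Transcript Companion's own client code; the base URL, model name, and environment variable are illustrative assumptions, chosen only to show the OpenAI-compatible calling pattern that providers such as Groq and Cerebras expose.

```python
# Minimal sketch (not the project's actual client code) of reaching a provider
# through its OpenAI-compatible API with streaming output, a temperature
# setting, and user-defined prefix/suffix text around the transcript.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],          # assumed environment variable name
    base_url="https://api.groq.com/openai/v1",   # Groq's OpenAI-compatible endpoint
)

prefix = "Summarize the key decisions in this meeting transcript:"  # user-defined prefix
suffix = "List action items at the end."                            # user-defined suffix
transcript = open("meeting.txt", encoding="utf-8").read()

# Stream tokens as they are generated so the response appears in real time.
stream = client.chat.completions.create(
    model="llama-3.3-70b-versatile",             # example model name
    messages=[{"role": "user", "content": f"{prefix}\n\n{transcript}\n\n{suffix}"}],
    temperature=0.3,                             # lower values give more focused output
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```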
git clone https://github.com/Franklyc/Transcript-companion.git
cd Transcript-companion
pip install -r requirements.txt
1. Copy `config.py.example` to `config.py` and `prefix.py.example` to `prefix.py`
2. Edit `config.py` and replace the placeholders with your actual API keys
3. Set `DEFAULT_FOLDER_PATH` in `config.py` to point to your transcript files directory
4. Edit the `get_original_prefix()` function in `prefix.py` to define initial instructions for the LLM
5. Run `python main.py` to start Transcript Companion
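As a rough illustration of what those two files end up containing, the sketch below fills in the names mentioned in the steps above (`DEFAULT_FOLDER_PATH`, `get_original_prefix()`). The key variable names and the prompt text are assumptions; the `config.py.example` and `prefix.py.example` files shipped with the repository are authoritative.

```python
# Hypothetical contents of the two files created in steps 1-4; defer to the
# repository's config.py.example and prefix.py.example for the real templates.

# config.py -- API keys and the transcript folder the app reads from
GROQ_API_KEY = "your-groq-api-key"               # placeholder, replace with a real key (assumed name)
GEMINI_API_KEY = "your-gemini-api-key"           # placeholder (assumed name)
DEFAULT_FOLDER_PATH = r"C:\path\to\transcripts"  # directory containing your transcript files

# prefix.py -- initial instructions sent to the LLM before each transcript
def get_original_prefix() -> str:
    # Example instructions only; tailor these to the analysis you want.
    return (
        "You are an assistant that analyzes meeting transcripts. "
        "Summarize the key points and list any action items."
    )
```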