Made a lil plugin for llm that only actually calls the LLM if it's a new prompt. Should save a little time and money, especially when running evals. → GitHub - kevinschaul/llm-cache-plugin: Check whether you've already run this prompt before calling the LLM
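The core idea is simple prompt caching: key on the prompt (and model), and only hit the API on a cache miss. Here's a minimal sketch of that pattern in Python — not the plugin's actual code, and `cached_prompt`, `call_llm`, and the cache file name are all made-up names for illustration:

```python
import hashlib
import json
from pathlib import Path

CACHE_PATH = Path("llm_cache.json")  # hypothetical cache file location


def cache_key(model: str, prompt: str) -> str:
    # Key on model + prompt so the same prompt against
    # different models doesn't collide.
    return hashlib.sha256(f"{model}\n{prompt}".encode()).hexdigest()


def cached_prompt(model: str, prompt: str, call_llm) -> str:
    # call_llm is whatever function actually hits the API.
    cache = json.loads(CACHE_PATH.read_text()) if CACHE_PATH.exists() else {}
    key = cache_key(model, prompt)
    if key in cache:
        return cache[key]  # cache hit: skip the API call entirely
    response = call_llm(model, prompt)
    cache[key] = response
    CACHE_PATH.write_text(json.dumps(cache))
    return response
```

For eval runs that replay the same prompts over and over, every repeat after the first becomes a free local lookup.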