@sh_reya
One of the most pressing questions in our AI Evals course is: "Why can't I just have an LLM write my LLM pipeline?" The nuanced answer is that you can use LLMs to assist, but not for the whole pipeline. Knowing where to put the LLM in the loop is the hard part. To unpack this, we invited Omar Khattab (@lateinteraction), creator of DSPy, leading expert on prompt optimization, and now professor at MIT, for a "fireside chat" in the course. He shed light on how he approaches pipeline development in practice. What stood out to us is that Omar spends most of his time on specification (defining the task clearly, looking at the data, and doing careful error analysis) before letting LLMs automate anything. This up-front rigor is what makes downstream optimization actually work. We've put the recording on YouTube. If you're wondering how Omar thinks about these tradeoffs, this conversation is worth a listen! https://t.co/j3D83hRLKW