@perplexity_ai
Code LLaMA is now on Perplexity’s LLaMa Chat! Try asking it to write a function for you, or explain a code snippet: 🔗 https://t.co/rwcPzknBgE This is the fastest way to try @MetaAI’s latest code-specialized LLM. With our model deployment expertise, we were able to bring you this model within 24 hours of its release. What’s next? We’ll integrate Code LLaMA into Perplexity, all in service of providing you with the best answers to your most technical questions!