🐦 Twitter Post Details


@omarsar0

Mistral AI is doubling down on small language models.

Their latest Ministral models (both the 3B and 8B) are pretty impressive and will be incredibly useful for a lot of LLM workflows.

Some observations:

I enjoy seeing how committed Mistral AI is to developing smaller and more capable models. They seem to understand what developers want and need today.

There is huge competition for the finest, smallest, and cheapest models. This is good for the AI developer community.

This sets up the community really well for the wave of innovation that's coming around on-device AI and agentic workflows. 2025 is going to be a wild year.

They don't mention the secret sauce behind these capable smaller models (probably some distillation happening), yet the Ministral 3B model already performs competitively with Mistral 7B. I think this is a great focus for Mistral as they seek to differentiate from other LLM providers.

Given this announcement, I am now super curious about what the next Gemma and Llama small models are going to bring. Mini models are taking over!

I use small models for processing data, structuring information, function calling, routing, evaluation pipelines, prompt chaining, agentic workflows, and a whole lot more.
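The routing pattern mentioned in the last paragraph can be sketched as follows. This is a minimal, hypothetical illustration, not code from the post: the model names, the task categories, and the `pick_model` helper are all assumptions chosen to show the idea of sending cheap, well-bounded tasks to a small model and reserving a larger model for open-ended work.

```python
# Hypothetical sketch of small-model routing. The model identifiers and
# task list below are illustrative assumptions, not a real provider API.

SMALL_MODEL = "ministral-3b"   # cheap and fast: good for bounded tasks
LARGE_MODEL = "mistral-large"  # reserved for open-ended reasoning

# Bounded tasks a 3B-8B model typically handles well
# (the kinds of workflows listed in the post).
SMALL_MODEL_TASKS = {
    "data_processing",
    "structuring",
    "function_calling",
    "routing",
    "evaluation",
}

def pick_model(task: str) -> str:
    """Route a task to the cheapest model expected to handle it."""
    return SMALL_MODEL if task in SMALL_MODEL_TASKS else LARGE_MODEL
```

In a real agentic pipeline the router itself is often a small model classifying the incoming request; here a static task label stands in for that step.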

Media 1

📊 Media Metadata

{
  "data": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/GaCOZmZXYAAi9xC.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.551605",
  "import_source": "network_archive_import",
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1846632835165327561/media_0.jpg?",
      "filename": "media_0.jpg",
      "original_url": "https://pbs.twimg.com/media/GaCOZmZXYAAi9xC.jpg"
    }
  ],
  "storage_migrated": true
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27933,
    "followers_count": 216711,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3689,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216711,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1846632835165327561",
  "conversation_id": "1846632835165327561",
  "full_text": "Mistral AI is doubling down on small language models. \n\nTheir latest Ministral models (both the 3B and 8B) are pretty impressive and will be incredibly useful for a lot of LLM workflows.\n\nSome observations:\n\nI enjoy seeing how committed Mistral AI is to developing smaller and more capable models.\n\nThey seem to understand what developers want and need today.\n\nThere is huge competition for the finest, smallest, and cheapest models. This is good for the AI developer community.\n\nThis sets up the community really well in terms of the wave of innovation that’s coming around on-device AI and agentic workflows. 2025 is going to be a wild year. \n\nThey don’t mention the secret sauce behind these capable smaller models (probably some distillation happening), the Ministral 3B model already performs competitively with Mistral 7B. I think this is a great focus of Mistral as they seek to differentiate from other LLM providers.\n\nGiven this announcement, I am now super curious about what the next Gemma and Llama small models are going to bring. Mini models are taking over!\n\nI use small models for processing data, structuring information, function calling, routing, evaluation pipelines, prompt chaining, agentic workflows, and a whole lot more.",
  "reply_count": 10,
  "retweet_count": 50,
  "favorite_count": 229,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GaCOZmZXYAAi9xC.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1846632835165327561",
  "created_at": "2024-10-16T19:22:31.000Z",
  "#sort_index": "1846632835165327561",
  "view_count": 20885,
  "quote_count": 3,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1846632835165327561"
}