🐦 Twitter Post Details


@omarsar0

Reasoning LLMs is one of the most interesting trends to watch going into 2025.

I’ve been thinking a lot about how to build with reasoning LLMs, specifically agentic workflows.

How can AI devs take advantage of components like MoA and MCTS when there is barely any research for it, not to mention the lack of insights and best practices?

First, how do we enable devs to build with reasoning capabilities?

I like how Nous Research is approaching this with their Forge Reasoning APIs and “Reasoning Layer” components (MoA, MCTS, and Chain of Code).

I think it’s way too early for such a reasoning layer but it seems that things are quickly moving in that direction; the o1 model series together with this forge reasoning API is a good indication of what’s to come.

Some thoughts on the Forge Reasoning API vs o1 for language agents:

I’ve been experimenting extensively with the o1 models and they are hard to customise. However, for many multi-agent systems, there is a need to get them to take on a persona that helps produce richer and more reliable outputs and facilitates better communication between agents. To achieve this, I often need to prompt the agents to behave and act a certain way and take on different roles depending on where they are in the conversation or process. Having the ability to use the right reasoning component or a combination, with configurable parameters (similar to the LLM itself), will be useful to build more complex and effective agentic systems. Customization is key here.

Extended thoughts here: https://t.co/XglPppukqc
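The idea of selecting a reasoning component with configurable parameters, per agent and per role, can be sketched roughly as below. This is a purely hypothetical illustration of the pattern the post describes, not the actual Forge Reasoning API; all names (`ReasoningConfig`, `build_prompt`, the `component`/`persona`/`depth` knobs) are assumptions invented for the example.

```python
# Hypothetical sketch of a configurable "reasoning layer" call.
# Component names (moa, mcts, chain_of_code) mirror the post's terminology;
# the parameters and API shape here are invented, not Nous Research's API.
from dataclasses import dataclass


@dataclass
class ReasoningConfig:
    component: str = "moa"     # which reasoning strategy to apply
    persona: str = "analyst"   # role the agent should adopt in the workflow
    depth: int = 2             # example component-specific knob (hypothetical)


def build_prompt(task: str, cfg: ReasoningConfig) -> str:
    """Compose a prompt that pins down both the persona and the
    reasoning strategy, the way the post suggests agents need."""
    return (
        f"You are acting as a {cfg.persona}. "
        f"Apply the {cfg.component} strategy (depth={cfg.depth}) "
        f"to the following task: {task}"
    )


# Different agents in a multi-agent system get different configs:
critic = ReasoningConfig(component="mcts", persona="critic", depth=3)
prompt = build_prompt("review the draft summary", critic)
```

The point of the sketch is that the reasoning component becomes a tunable parameter per agent, much like temperature or max tokens are for the LLM itself.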

Media 1

📊 Media Metadata

{
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.553267",
  "import_source": "network_archive_import",
  "links_checked": true,
  "checked_at": "2025-08-10T10:32:52.367021",
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1857569591830130854/media_0.jpg?",
      "filename": "media_0.jpg"
    },
    {
      "media_url": "https://pbs.twimg.com/media/GcdpV7eXUAA8tGH.jpg",
      "type": "photo"
    }
  ],
  "reprocessed_at": "2025-08-12T15:26:18.854716",
  "reprocessed_reason": "missing_media_array",
  "original_structure": "had_both"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27933,
    "followers_count": 216713,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3688,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216713,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1857569591830130854",
  "conversation_id": "1857569591830130854",
  "full_text": "Reasoning LLMs is one of the most interesting trends to watch going into 2025.\n\nI’ve been thinking a lot about how to build with reasoning LLMs, specifically agentic workflows.\n\nHow can AI devs take advantage of components like MoA and MCTS when there is barely any research for it, not to mention the lack of insights and best practices?\n\nFirst, how do we enable devs to build with reasoning capabilities?\n\nI like how Nous Research is approaching this with their Forge Reasoning APIs and “Reasoning Layer” components (MoA, MCTS, and Chain of Code). \n\nI think it’s way too early for such a reasoning layer but it seems that things are quickly moving in that direction; the o1 model series together with this forge reasoning API is a good indication of what’s to come.\n\nSome thoughts on the Forge Reasoning API vs o1 for language agents:\n\nI’ve been experimenting extensively with the o1 models and they are hard to customise. However, for many multi-agent systems, there is a need to get them to take on a persona that helps produce richer and more reliable outputs and facilitates better communication between agents. To achieve this, I often need to prompt the agents to behave and act a certain way and take on different roles depending on where they are in the conversation or process. Having the ability to use the right reasoning component or a combination, with configurable parameters (similar to the LLM itself), will be useful to build more complex and effective agentic systems. Customization is key here.\n\nExtended thoughts here: https://t.co/XglPppukqc",
  "reply_count": 9,
  "retweet_count": 47,
  "favorite_count": 192,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GcdpV7eXUAA8tGH.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1857569591830130854",
  "created_at": "2024-11-15T23:41:17.000Z",
  "#sort_index": "1857569591830130854",
  "view_count": 16296,
  "quote_count": 3,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1857569591830130854"
}