🐦 Twitter Post Details


@omarsar0

Asynchronous LLM Function Calling

Proposes AsyncLM, a system for asynchronous LLM function calling.

AsyncLM reduces task completion latency by 1.6x-5.4x compared to synchronous function calling.

It enables LLMs to generate and execute function calls concurrently.

"We design an in-context protocol for function calls and interrupts, provide fine-tuning strategy to adapt LLMs to the interrupt semantics, and implement these mechanisms efficiently on LLM inference process."

What's interesting about this interrupt mechanism is that it could be extended to other human-LLM or LLM-LLM interactions. Very cool paper!
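The idea of generating and executing function calls concurrently, with completed calls delivered back as interrupts, can be sketched with plain asyncio. This is an illustrative approximation of the pattern the post describes, not the AsyncLM paper's implementation; the names (`run_tool`, `async_function_calling`, the `[INTERRUPT]` marker) are hypothetical.

```python
import asyncio

async def run_tool(name: str, arg: str) -> str:
    # Stand-in for a slow external function call (API, search, calculator).
    await asyncio.sleep(0.01)
    return f"{name}({arg}) -> ok"

async def async_function_calling(calls):
    # Launch all function calls concurrently instead of awaiting each
    # one before the model may continue (the synchronous pattern).
    pending = {asyncio.create_task(run_tool(n, a)) for n, a in calls}
    transcript = []
    while pending:
        # Wake up as soon as ANY call finishes, not when all do.
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED
        )
        for task in done:
            # "Interrupt": inject the completed result back into the
            # generation context so the model can use it mid-stream.
            transcript.append(f"[INTERRUPT] {task.result()}")
    return transcript

transcript = asyncio.run(
    async_function_calling([("search", "q1"), ("calc", "2+2")])
)
print(transcript)
```

The latency win comes from overlapping tool execution with continued generation: the model is never blocked on the slowest call, and results arrive in completion order rather than issue order.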

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27931,
    "followers_count": 216714,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3688,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216714,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1866855077983686804",
  "conversation_id": "1866855077983686804",
  "full_text": "Asynchronous LLM Function Calling\n\nProposes AsyncLM, a system for asynchronous LLM function calling.\n\nAsyncLM can reduce task completion latency from 1.6x-5.4x compared to synchronous function calling.\n\nIt enables LLMs to generate and execute function calls concurrently.\n\n\"We design an in-context protocol for function calls and interrupts, provide fine-tuning strategy to adapt LLMs to the interrupt semantics, and implement these mechanisms efficiently on LLM inference process.\"\n\nWhat's interesting about this interrupt mechanism is that it could be extended to other human-LLM or LLM-LLM interactions. Very cool paper!",
  "reply_count": 5,
  "retweet_count": 88,
  "favorite_count": 442,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GehlPb7WIAA-mXn.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1866855077983686804",
  "created_at": "2024-12-11T14:38:29.000Z",
  "#sort_index": "1866855077983686804",
  "view_count": 24298,
  "quote_count": 3,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1866855077983686804"
}