🐦 Twitter Post Details


@omarsar0

The Gemini 1.5 Pro model guide is live!

With support of up to 1 million tokens context length, you may be wondering what's possible with Gemini 1.5 Pro.

My overall impression after our first round of testing is that Gemini 1.5 Pro is among the most powerful long context LLMs available today.

I've published a summary of Gemini 1.5 Pro's capabilities along with concrete examples in the prompting guide. These are just preliminary tests. I will continue to analyze and document the model's capabilities and limitations. Stay tuned!

From preliminary experiments, Gemini 1.5 Pro shows impressive capabilities around multimodal reasoning, video understanding, long document question answering, code reasoning on entire codebases, and in-context learning.

One insight from testing this model is that we will have different kinds of LLMs that support different types of use cases. Gemini 1.5 Pro is not meant to be a model to reign among all. The long context LLMs are not meant to cover every use case imaginable, they are meant to unlock complex use cases that were unimaginable before with LLMs.

Link to guide below ↓

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with LLMs, RAG, and AI Agents @dair_ai • Prev: Meta AI, Galactica LLM, PapersWithCode, PhD • Creator of the Prompting Guide (~3M learners)",
    "fast_followers_count": 0,
    "favourites_count": 24671,
    "followers_count": 183512,
    "friends_count": 465,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3278,
    "location": "",
    "media_count": 1948,
    "name": "elvis",
    "normal_followers_count": 183512,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 10319,
    "translator_type": "regular",
    "url": "https://t.co/H9w2yq9w1L",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1759936567672479872",
  "conversation_id": "1759936567672479872",
  "full_text": "The Gemini 1.5 Pro model guide is live!  \n\nWith support of up to 1 million tokens context length, you may be wondering what's possible with Gemini 1.5 Pro.\n\nMy overall impression after our first round of testing is that Gemini 1.5 Pro is among the most powerful long context LLMs available today. \n\nI've published a summary of Gemini 1.5 Pro's capabilities along with concrete examples in the prompting guide. These are just preliminary tests. I will continue to analyze and document the model's capabilities and limitations. Stay tuned!\n\nFrom preliminary experiments, Gemini 1.5 Pro shows impressive capabilities around multimodal reasoning, video understanding, long document question answering, code reasoning on entire codebases, and in-context learning.\n\nOne insight from testing this model is that we will have different kinds of LLMs that support different types of use cases. Gemini 1.5 Pro is not meant to be a model to reign among all. The long context LLMs are not meant to cover every use case imaginable, they are meant to unlock complex use cases that were unimaginable before with LLMs.  \n\nLink to guide below ↓",
  "reply_count": 14,
  "retweet_count": 73,
  "favorite_count": 338,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GGyIyuCW4AEA822.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1759936567672479872",
  "created_at": "2024-02-20T13:42:30.000Z",
  "#sort_index": "1759936567672479872",
  "view_count": 65409,
  "quote_count": 5,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/omarsar0/status/1759936567672479872"
}
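
The response above nests the author under a `user` object and keeps engagement counts at the top level. A minimal sketch of pulling out the display-worthy fields, assuming the shape shown; the field names come from the raw JSON, but the `summarize_post` helper is illustrative, not part of any official API:

```python
import json

def summarize_post(post: dict) -> dict:
    """Extract a few display fields from an enriched post object.

    Assumes the response shape shown above: author info lives under
    "user", counts ("favorite_count", "retweet_count", "view_count")
    sit at the top level, and "media" is a list of attachment objects.
    """
    user = post.get("user", {})
    return {
        "author": "@" + user.get("screen_name", "?"),
        "posted_at": post.get("created_at"),
        "likes": post.get("favorite_count", 0),
        "retweets": post.get("retweet_count", 0),
        "views": post.get("view_count"),
        "media_urls": [m["media_url"] for m in post.get("media", [])],
    }

# Example on a trimmed copy of the response above:
sample = json.loads("""
{
  "user": {"screen_name": "omarsar0", "name": "elvis"},
  "created_at": "2024-02-20T13:42:30.000Z",
  "favorite_count": 338,
  "retweet_count": 73,
  "view_count": 65409,
  "media": [
    {"media_url": "https://pbs.twimg.com/media/GGyIyuCW4AEA822.jpg",
     "type": "photo"}
  ]
}
""")
print(summarize_post(sample))
```

Using `.get()` with defaults keeps the helper tolerant of fields that are absent on some posts (e.g. `media` or `view_count` are not guaranteed on every object).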