🐦 Twitter Post Details


@omarsar0

Gemini 1.5 Pro and its 1M-token context length show huge potential! I have been experimenting with Gemini 1.5 Pro (inside Google AI Studio) and find that its reasoning ability over long-form content is quite good.

I am particularly interested in LLMs that can retrieve and reason over long contexts across different modalities. This is what unlocks all kinds of complex use cases.

For now, my experiments are around scientific papers and the kind of complex analysis or questions the model can accurately answer.

In the screenshot, we prompt the model with two papers as input. The model needs to analyze both papers before it can return an answer. What I found interesting in the response it gave me is that it even analyzed tables before it sent back a response. It's exciting to see this type of analysis on the fly without using a RAG system.

Beyond this, we can ask for more concrete explanations of findings and experiments by giving it more context. You can also prompt the model to extend a survey paper based on recent papers or even generate your own based on a desired format. And a whole lot more.

A full analysis and more examples of Gemini 1.5 Pro will be published in the prompting guide soon. Stay tuned!
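The two-papers-in-one-prompt workflow described above can be sketched roughly as follows. This is a minimal, hypothetical example using the `google-generativeai` Python SDK, not the author's actual setup (the post uses Google AI Studio); the file names, prompt wording, and question are assumptions, and the API call only runs if a `GOOGLE_API_KEY` is present.

```python
import os

def build_two_paper_prompt(paper_a: str, paper_b: str, question: str) -> str:
    """Concatenate two full paper texts and a question into a single
    long-context prompt (no RAG/chunking: the whole documents go in)."""
    return (
        "You are given two research papers.\n\n"
        f"=== Paper 1 ===\n{paper_a}\n\n"
        f"=== Paper 2 ===\n{paper_b}\n\n"
        "Answer the question using BOTH papers, including any tables.\n"
        f"Question: {question}"
    )

if __name__ == "__main__" and os.getenv("GOOGLE_API_KEY"):
    # pip install google-generativeai
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")  # 1M-token context window
    prompt = build_two_paper_prompt(
        open("paper1.txt").read(),   # hypothetical local copies of the papers
        open("paper2.txt").read(),
        "Compare the main experimental results of the two papers.",
    )
    print(model.generate_content(prompt).text)
```

The point of the sketch is the absence of a retrieval step: because the context window fits both papers whole, the prompt is just the raw documents plus the question.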

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with LLMs, RAG, and AI Agents @dair_ai • Prev: Meta AI, Galactica LLM, PapersWithCode, PhD • Creator of the Prompting Guide (~3M learners)",
    "fast_followers_count": 0,
    "favourites_count": 24671,
    "followers_count": 183517,
    "friends_count": 465,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3278,
    "location": "",
    "media_count": 1948,
    "name": "elvis",
    "normal_followers_count": 183517,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 10319,
    "translator_type": "regular",
    "url": "https://t.co/H9w2yq9w1L",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1759393865499369912",
  "conversation_id": "1759393865499369912",
  "full_text": "Gemini 1.5 Pro and its 1M tokens context length show huge potential!\n\nI have been experimenting with Gemini 1.5 Pro (inside Google AI Studio) and find that its reasoning ability over long-form content is quite good. \n\nI am particularly interested in LLMs that can retrieve and reason over long contexts across different modalities. This is what unlocks all kinds of complex use cases. \n\nFor now, my experiments are around scientific papers and the kind of complex analysis or questions the model can accurately answer. \n\nIn the screenshot, we prompt the model with two papers as input. The model needs to analyze both papers before it can return an answer. What I found interesting in the response it gave me is that it even analyzed tables before it sent back a response. It's exciting to see this type of analysis on the fly without using a RAG system. \n\nBeyond this, we can ask for more concrete explanations of findings and experiments by giving it more context. You can also prompt the model to extend a survey paper based on recent papers or even generate your own based on a desired format. And a whole lot more. \n\nA full analysis and more examples of Gemini 1.5 Pro will be published in the promoting guide soon. Stay tuned!",
  "reply_count": 21,
  "retweet_count": 52,
  "favorite_count": 384,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GGqfOw4XYAw_r0U.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1759393865499369912",
  "created_at": "2024-02-19T01:46:00.000Z",
  "#sort_index": "1759393865499369912",
  "view_count": 52056,
  "quote_count": 2,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/omarsar0/status/1759393865499369912"
}