🐦 Twitter Post Details

@iScienceLuvr

PaLI-3 Vision Language Models: Smaller, Faster, Stronger

abs: https://t.co/VATjcGkZXi

Uses a 2B SigLIP vision encoder and 3B UL2 language model to obtain SOTA performance on visually-situated text understanding tasks. SigLIP observed to be a better encoder than classification-pretrained ViT. Model generalizes to video understanding tasks despite not being trained with videos.
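
The tweet describes the PaLI-3 recipe: a contrastively pretrained SigLIP vision encoder produces image tokens that are fed, together with the text, into a UL2 encoder-decoder language model. Below is a minimal PyTorch sketch of that composition; the module interfaces (vision_encoder, language_model.embed, inputs_embeds) and the dimension arguments are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class PaLI3Sketch(nn.Module):
    """Hypothetical composition: SigLIP image tokens prefixed to UL2 text input."""

    def __init__(self, vision_encoder, language_model, vision_dim, text_dim):
        super().__init__()
        self.vision_encoder = vision_encoder  # stands in for the ~2B SigLIP ViT
        self.language_model = language_model  # stands in for the ~3B UL2 enc-dec
        # Linear projection from the vision width into the LM embedding width.
        self.projector = nn.Linear(vision_dim, text_dim)

    def forward(self, pixel_values, input_ids, labels=None):
        image_tokens = self.vision_encoder(pixel_values)        # (B, N_img, vision_dim)
        image_embeds = self.projector(image_tokens)             # (B, N_img, text_dim)
        text_embeds = self.language_model.embed(input_ids)      # (B, N_txt, text_dim)
        prefix = torch.cat([image_embeds, text_embeds], dim=1)  # joint multimodal sequence
        # The LM attends over the joint sequence and decodes the target text.
        return self.language_model(inputs_embeds=prefix, labels=labels)

Swapping a classification-pretrained ViT for SigLIP changes only vision_encoder in this sketch, which is exactly the comparison the tweet highlights.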

Media 1: photo (https://pbs.twimg.com/media/F8XLJh9awAA48_W.jpg)

📊 Media Metadata

{
  "data": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/F8XLJh9awAA48_W.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "score": 0.89,
  "scored_at": "2025-08-09T13:47:19.520807",
  "import_source": "manual_curation_2023",
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1712999591426285792/media_0.jpg?",
      "filename": "media_0.jpg",
      "original_url": "https://pbs.twimg.com/media/F8XLJh9awAA48_W.jpg"
    }
  ],
  "storage_migrated": true
}
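
The metadata above records both the original Twitter CDN URL and a migrated copy in Supabase storage, with storage_migrated flagging which source to prefer. Here is a small sketch of reading such a record; only the field names come from the JSON above, while the fallback logic and the file name are assumptions.

import json

def resolve_media_urls(metadata: dict) -> list[str]:
    """Prefer migrated storage copies, falling back to the original CDN URLs."""
    urls = []
    if metadata.get("storage_migrated"):
        # Migrated copies live under "media"; "original_url" is kept for reference.
        for item in metadata.get("media", []):
            urls.append(item.get("url") or item["original_url"])
    else:
        # Pre-migration records only carry the Twitter CDN URL under "data".
        for item in metadata.get("data", []):
            urls.append(item["media_url"])
    return urls

with open("media_metadata.json") as f:  # hypothetical file holding the JSON above
    meta = json.load(f)
print(resolve_media_urls(meta))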

🔧 Raw API Response

{
  "user": {
    "created_at": "2011-12-20T03:45:50.000Z",
    "default_profile_image": false,
    "description": "PhD at 19 |\nFounder and CEO at @MedARC_AI |\nResearch Director at @StabilityAI | \n@kaggle Notebooks GM |\nBiomed. engineer @ 14 |\nTEDx talk➡https://t.co/DwMkst4bnG",
    "fast_followers_count": 0,
    "favourites_count": 60004,
    "followers_count": 45437,
    "friends_count": 995,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 703,
    "location": "",
    "media_count": 1203,
    "name": "Tanishq Mathew Abraham, PhD",
    "normal_followers_count": 45437,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/441465751/1675968078",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1553508977735962624/nnlSwBmu_normal.jpg",
    "screen_name": "iScienceLuvr",
    "statuses_count": 12087,
    "translator_type": "none",
    "url": "https://t.co/nNzCz2VVd1",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "441465751"
  },
  "id": "1712999591426285792",
  "conversation_id": "1712999591426285792",
  "full_text": "PaLI-3 Vision Language Models: Smaller, Faster, Stronger\n\nabs: https://t.co/VATjcGkZXi\n\nUses a 2B SigLIP vision encoder and 3B UL2 language model to obtain SOTA performance on visually-situated text understanding tasks. SigLIP observed to be a better encoder than classification-pretrained ViT. Model generalizes to video understanding tasks despite not being trained with videos.",
  "reply_count": 2,
  "retweet_count": 15,
  "favorite_count": 122,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/r25dK0yx15",
      "expanded_url": "https://openreview.net/forum?id=JpyWPfzu0b",
      "display_url": "openreview.net/forum?id=JpyWP…"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F8XLJh9awAA48_W.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/iScienceLuvr/status/1712999591426285792",
  "created_at": "2023-10-14T01:11:43.000Z",
  "#sort_index": "1712999591426285792",
  "view_count": 23231,
  "quote_count": 2,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/iscienceluvr/status/1712999591426285792"
}
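
For indexing or scoring, a raw response like the one above can be flattened into a compact record. This is a sketch under the assumption that the JSON is saved to a file; the keys mirror the response shown, while the TweetRecord shape itself is illustrative.

import json
from dataclasses import dataclass

@dataclass
class TweetRecord:
    """Minimal normalized view of one enriched tweet."""
    id: str
    screen_name: str
    full_text: str
    created_at: str
    favorite_count: int
    retweet_count: int
    view_count: int

def from_api(raw: dict) -> TweetRecord:
    # Pull only the fields present in the raw API response above.
    return TweetRecord(
        id=raw["id"],
        screen_name=raw["user"]["screen_name"],
        full_text=raw["full_text"],
        created_at=raw["created_at"],
        favorite_count=raw["favorite_count"],
        retweet_count=raw["retweet_count"],
        view_count=raw["view_count"],
    )

with open("raw_response.json") as f:  # hypothetical file holding the JSON above
    record = from_api(json.load(f))
print(record.screen_name, record.favorite_count)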