🐦 Twitter Post Details


@jiqizhixin

LeCun's JEPA has evolved into a vision-language model, with 1.6B parameters rivaling the 72B Qwen-VL.

Instead of predicting words directly, the proposed VL-JEPA learns to predict the core "meaning" of a text in an abstract space, ignoring surface-level wording variations.

This method outperforms standard token-based training with 50% fewer parameters. It beats models like CLIP & SigLIP2 on video classification/retrieval tasks and matches larger VLMs on VQA, while using a decoder only when needed to cut decoding ops by nearly 3x.

VL-JEPA: Joint Embedding Predictive Architecture for Vision-language

Paper: https://t.co/rGglBXvKex

Our report: https://t.co/TXEHRquSBr
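The tweet's central idea, predicting a text's embedding rather than its tokens, can be sketched with a toy objective. This is a minimal illustration of the JEPA-style loss, not the paper's implementation: all names, shapes, and the linear predictor are hypothetical stand-ins for the actual encoders and predictor network.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical stand-ins for encoder outputs (shapes are illustrative).
D = 64                                      # shared embedding width
vision_emb = rng.normal(size=(2, D))        # vision-encoder output for 2 clips
target_text_emb = rng.normal(size=(2, D))   # frozen text-encoder targets

# A single linear map standing in for the trained predictor module.
W = rng.normal(size=(D, D)) / np.sqrt(D)
pred_text_emb = vision_emb @ W

def embedding_loss(pred, target):
    """JEPA-style objective: match normalized embeddings, not tokens.

    Because the target is a continuous embedding, paraphrases of the
    same caption map to nearby targets, so surface wording is ignored.
    """
    pred, target = l2_normalize(pred), l2_normalize(target)
    return float(np.mean(np.sum((pred - target) ** 2, axis=-1)))

loss = embedding_loss(pred_text_emb, target_text_emb)
```

Under this formulation a text decoder is only consulted when an actual string output is required (e.g. VQA answers), which is consistent with the tweet's claim of cutting decoding operations by roughly 3x.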

Media 1
Media 2

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2004483098235343338/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2004483098235343338/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    },
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2004483098235343338/media_2.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2004483098235343338/media_2.jpg?",
      "type": "photo",
      "filename": "media_2.jpg"
    }
  ],
  "processed_at": "2025-12-31T02:52:20.422281",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2004483098235343338",
  "url": "https://x.com/jiqizhixin/status/2004483098235343338",
  "twitterUrl": "https://twitter.com/jiqizhixin/status/2004483098235343338",
  "text": "LeCun's JEPA has evolved into a vision-language model, with 1.6B parameters rivaling the 72B Qwen-VL.\n\nInstead of predicting words directly, the proposed VL-JEPA learns to predict the core \"meaning\" of a text in an abstract space, ignoring surface-level wording variations.\n\nThis method outperforms standard token-based training with 50% fewer parameters. It beats models like CLIP & SigLIP2 on video classification/retrieval tasks and matches larger VLMs on VQA, while using a decoder only when needed to cut decoding ops by nearly 3x.\n\nVL-JEPA: Joint Embedding Predictive Architecture for Vision-language\n\nPaper: https://t.co/rGglBXvKex\n\nOur report: https://t.co/TXEHRquSBr",
  "source": "Twitter for iPhone",
  "retweetCount": 148,
  "replyCount": 24,
  "likeCount": 1160,
  "quoteCount": 13,
  "viewCount": 107825,
  "createdAt": "Fri Dec 26 09:23:06 +0000 2025",
  "lang": "en",
  "bookmarkCount": 915,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2004483098235343338",
  "displayTextRange": [
    0,
    280
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "jiqizhixin",
    "url": "https://x.com/jiqizhixin",
    "twitterUrl": "https://twitter.com/jiqizhixin",
    "id": "819861340294524928",
    "name": "机器之心 JIQIZHIXIN",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/895029477192851456/7dh0KKva_normal.jpg",
    "coverPicture": "",
    "description": "China's leading media & information provider for #AI & #MachineLearning",
    "location": "Beijing, China",
    "followers": 14035,
    "following": 801,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Fri Jan 13 10:59:10 +0000 2017",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {
        "urls": [
          {
            "display_url": "jiqizhixin.com",
            "expanded_url": "http://www.jiqizhixin.com",
            "url": "https://t.co/Ap70A5wYgg",
            "indices": [
              0,
              23
            ]
          }
        ]
      }
    },
    "fastFollowersCount": 0,
    "favouritesCount": 4360,
    "hasCustomTimelines": false,
    "isTranslator": false,
    "mediaCount": 3274,
    "statusesCount": 8867,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1952884324120043869"
    ],
    "profile_bio": {
      "description": "China's leading media & information provider for #AI & #MachineLearning"
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.x.com/oPkRoxSmo3",
        "expanded_url": "https://x.com/jiqizhixin/status/2004483098235343338/photo/1",
        "id_str": "2004483075473059840",
        "indices": [
          281,
          304
        ],
        "media_key": "3_2004483075473059840",
        "media_url_https": "https://pbs.twimg.com/media/G9FavKfasAAhNaT.jpg",
        "type": "photo",
        "url": "https://t.co/oPkRoxSmo3",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "medium": {
            "faces": []
          },
          "small": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "sizes": {
          "large": {
            "h": 447,
            "w": 1390,
            "resize": "fit"
          },
          "medium": {
            "h": 386,
            "w": 1200,
            "resize": "fit"
          },
          "small": {
            "h": 219,
            "w": 680,
            "resize": "fit"
          },
          "thumb": {
            "h": 150,
            "w": 150,
            "resize": "crop"
          }
        },
        "original_info": {
          "height": 447,
          "width": 1390,
          "focus_rects": [
            {
              "x": 592,
              "y": 0,
              "w": 798,
              "h": 447
            },
            {
              "x": 853,
              "y": 0,
              "w": 447,
              "h": 447
            },
            {
              "x": 880,
              "y": 0,
              "w": 392,
              "h": 447
            },
            {
              "x": 964,
              "y": 0,
              "w": 224,
              "h": 447
            },
            {
              "x": 0,
              "y": 0,
              "w": 1390,
              "h": 447
            }
          ]
        },
        "allow_download_status": {
          "allow_download": true
        },
        "media_results": {
          "result": {
            "media_key": "3_2004483075473059840"
          }
        }
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [
      {
        "display_url": "arxiv.org/abs/2512.10942",
        "expanded_url": "https://arxiv.org/abs/2512.10942",
        "url": "https://t.co/rGglBXvKex",
        "indices": [
          615,
          638
        ]
      },
      {
        "display_url": "mp.weixin.qq.com/s/ah29v42DLUHn…",
        "expanded_url": "https://mp.weixin.qq.com/s/ah29v42DLUHnbYPYCSjDYQ",
        "url": "https://t.co/TXEHRquSBr",
        "indices": [
          652,
          675
        ]
      }
    ],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "article": null
}