🐦 Twitter Post Details

Viewing enriched Twitter post

@XuCanwen

📑 Contrastive Post-training Large Language Models on Data Curriculum 👉 https://t.co/NdWtoyNcrw 🌗 Different models can be used for contrastive training LLM 🚀 LLMs can be improved by learning the nuances between a strong model and a weaker one 🐳 Scale-up experiments on Orca https://t.co/e8Lvwp2L1M

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1709420017950114248/media_0.jpg",
      "type": "photo",
      "original_url": "https://pbs.twimg.com/media/F7kQk4IakAAOhMI.jpg",
      "download_date": "2025-08-13T06:01:52.457004",
      "stored_in_supabase": true
    }
  ],
  "conversion_date": "2025-08-13T00:39:51.179142",
  "format_converted": true,
  "original_structure": "had_media_only"
}
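
The media-metadata record above maps each mirrored image back to its original Twitter CDN URL. A minimal sketch of walking that structure, using only the field names present in the JSON (`media`, `type`, `url`, `original_url`, `stored_in_supabase`, `download_date`); the fallback logic itself is illustrative:

```python
import json
from datetime import datetime

# The record shown above, trimmed to the fields this sketch uses.
metadata = json.loads("""
{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1709420017950114248/media_0.jpg",
      "type": "photo",
      "original_url": "https://pbs.twimg.com/media/F7kQk4IakAAOhMI.jpg",
      "download_date": "2025-08-13T06:01:52.457004",
      "stored_in_supabase": true
    }
  ],
  "format_converted": true
}
""")

for item in metadata["media"]:
    # Prefer the mirrored Supabase copy when the download succeeded,
    # fall back to the original Twitter CDN URL otherwise.
    src = item["url"] if item.get("stored_in_supabase") else item["original_url"]
    downloaded = datetime.fromisoformat(item["download_date"])
    print(item["type"], src, downloaded.date())
```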

🔧 Raw API Response

{
  "user": {
    "created_at": "2017-03-23T00:18:33.000Z",
    "default_profile_image": false,
    "description": "PhD candidate @UCSanDiego šŸ„; Intern @Microsoft; Formerly @GoogleAI @huggingface šŸ¤—. RT & like ≠ endorsements. Views are my own. He/him",
    "fast_followers_count": 0,
    "favourites_count": 766,
    "followers_count": 1790,
    "friends_count": 391,
    "has_custom_timelines": false,
    "is_translator": false,
    "listed_count": 52,
    "location": "",
    "media_count": 54,
    "name": "Canwen Xu",
    "normal_followers_count": 1790,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/844704885870333952/1632517417",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1150301652294881280/WT_PYn4f_normal.jpg",
    "screen_name": "XuCanwen",
    "statuses_count": 495,
    "translator_type": "none",
    "url": "https://t.co/wFMNo27IQD",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "844704885870333952"
  },
  "id": "1709420017950114248",
  "conversation_id": "1709420017950114248",
  "full_text": "šŸ“‘ Contrastive Post-training Large Language Models on Data Curriculum\nšŸ‘‰ https://t.co/NdWtoyNcrw\n\nšŸŒ— Different models can be used for contrastive training LLM\nšŸš€ LLMs can be improved by learning the nuances between a strong model and a weaker one\n🐳 Scale-up experiments on Orca https://t.co/e8Lvwp2L1M",
  "reply_count": 1,
  "retweet_count": 13,
  "favorite_count": 82,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/NdWtoyNcrw",
      "expanded_url": "https://arxiv.org/abs/2310.02263",
      "display_url": "arxiv.org/abs/2310.02263"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F7kQk4IakAAOhMI.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/XuCanwen/status/1709420017950114248",
  "created_at": "2023-10-04T04:07:46.000Z",
  "#sort_index": "1709420017950114248",
  "view_count": 17514,
  "quote_count": 2,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": false,
  "startUrl": "https://twitter.com/xucanwen/status/1709420017950114248"
}
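
The raw response bundles the tweet text, shortened t.co links with their expansions, and engagement counts. A minimal sketch of turning it into a readable summary, using only fields present in the JSON above (the summary format itself is illustrative):

```python
import json

# The response shown above, trimmed to the fields this sketch uses.
post = json.loads("""
{
  "user": {"screen_name": "XuCanwen", "followers_count": 1790},
  "id": "1709420017950114248",
  "full_text": "Contrastive Post-training Large Language Models on Data Curriculum https://t.co/NdWtoyNcrw",
  "reply_count": 1,
  "retweet_count": 13,
  "favorite_count": 82,
  "urls": [
    {"url": "https://t.co/NdWtoyNcrw",
     "expanded_url": "https://arxiv.org/abs/2310.02263",
     "display_url": "arxiv.org/abs/2310.02263"}
  ],
  "view_count": 17514,
  "quote_count": 2
}
""")

# Replace shortened t.co links with their expanded targets.
text = post["full_text"]
for u in post["urls"]:
    text = text.replace(u["url"], u["expanded_url"])

# Total interactions across likes, retweets, replies, and quotes.
engagement = (post["favorite_count"] + post["retweet_count"]
              + post["reply_count"] + post["quote_count"])
print(f"@{post['user']['screen_name']}: {engagement} interactions, "
      f"{post['view_count']} views")
print(text)
```

For this post, the engagement total works out to 82 + 13 + 1 + 2 = 98 interactions against 17,514 views.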