🐦 Twitter Post Details


@codezakh

Can we automate the process of generating data to improve a model on diverse, open-ended tasks, based on automatically discovered model weaknesses?

Introducing DataEnvGym, a testbed for data-generation agents + teaching environments.

Environment trains/evaluates student model ➡️ Environment discovers skills/errors and gives feedback to agent ➡️ Agent generates updated training data to address weaknesses ➡️ Iterate

Key idea: frame data generation + model improvement as an RL-style sequential decision-making task. States encode student errors, the policy decides actions encoding which data to generate, and the reward is the performance of the student model.

We provide several modular environments + teaching agents that can improve models on VQA/math/programming, along with a leaderboard benchmarking these agents. We welcome more entries to our leaderboard!

Thread 🧵👇 (1/9)
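The environment/agent loop described in the post can be sketched as a toy simulation. Everything here is an illustrative placeholder, not the actual DataEnvGym API: the student is a dictionary of per-skill strengths, the policy simply targets the weakest skill, and training is a fixed additive bump.

```python
class StudentModel:
    """Toy 'student': per-skill accuracy grows with targeted training data."""
    def __init__(self):
        # Hypothetical starting strengths for the three task families.
        self.skill_strength = {"vqa": 0.2, "math": 0.2, "programming": 0.2}

    def evaluate(self):
        # State signal for the agent: per-skill accuracy, capped at 1.0.
        return {s: min(v, 1.0) for s, v in self.skill_strength.items()}

    def train(self, data):
        # Each generated example nudges the targeted skill upward.
        for skill in data:
            self.skill_strength[skill] += 0.1


def agent_policy(per_skill_accuracy, budget=5):
    """State -> action: generate `budget` examples for the weakest skill."""
    weakest = min(per_skill_accuracy, key=per_skill_accuracy.get)
    return [weakest] * budget


def run_episode(steps=10):
    student = StudentModel()
    for _ in range(steps):
        errors = student.evaluate()   # environment discovers weaknesses
        data = agent_policy(errors)   # agent decides which data to generate
        student.train(data)           # environment retrains the student
    # Reward: mean accuracy of the student across skills.
    return sum(student.evaluate().values()) / 3
```

Under these assumptions, more iterations of the loop yield a higher reward, which mirrors the iterate-until-improved framing in the post.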

🔧 Raw API Response

{
  "user": {
    "created_at": "2023-06-17T04:32:28.000Z",
    "default_profile_image": false,
    "description": "@uncnlp with @mohitban47 working on grounded reasoning + multimodal agents // currently @allen_ai formerly @neclabsamerica // bs+ms CompE @northeastern",
    "fast_followers_count": 0,
    "favourites_count": 541,
    "followers_count": 373,
    "friends_count": 534,
    "has_custom_timelines": false,
    "is_translator": false,
    "listed_count": 3,
    "location": "Boston, USA",
    "media_count": 10,
    "name": "Zaid Khan",
    "normal_followers_count": 373,
    "possibly_sensitive": false,
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1669927321610977280/01Vm6UQo_normal.jpg",
    "screen_name": "codezakh",
    "statuses_count": 253,
    "translator_type": "none",
    "url": "https://t.co/7o8ktayQaX",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "1669925833891356673"
  },
  "id": "1844064267999531151",
  "conversation_id": "1844064267999531151",
  "full_text": "Can we automate the process of generating data to improve a model on diverse, open-ended tasks, based on automatically-discovered model weaknesses?\n\nIntroducing DataEnvGym - a testbed for data-generation agents + teaching environments. \n\nEnvironment trains/evaluates student model ➡️ Environment discovers skills/errors and gives feedback to agent ➡️  Agent generates updated training data to address weaknesses  ➡️  Iterate\n\nKey Idea -- Frame data generation + model improvement as an RL-style sequential decision-making task: states encode student errors, policy decides actions encoding which data to generate, and reward is the performance of the student model.\n\nWe provide several modular environments + teaching agents that can improve models on VQA/math/programming, and provide a leaderboard benchmarking these agents. We welcome more entries to our leaderboard!\n\nThread 🧵👇 (1/9)",
  "reply_count": 2,
  "retweet_count": 89,
  "favorite_count": 275,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GZduAOXaAAE-gJz.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/codezakh/status/1844064267999531151",
  "created_at": "2024-10-09T17:15:57.000Z",
  "#sort_index": "1844064267999531151",
  "view_count": 50277,
  "quote_count": 6,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/codezakh/status/1844064267999531151"
}
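The engagement counters in the raw response above can be consumed directly with the standard `json` module. The field names (`favorite_count`, `retweet_count`, `reply_count`, `quote_count`, `view_count`) are taken from the response itself; the engagement-rate formula is just an illustrative heuristic, not part of the API.

```python
import json

# Subset of the raw API response above, with only the count fields.
raw = """
{
  "favorite_count": 275,
  "retweet_count": 89,
  "reply_count": 2,
  "quote_count": 6,
  "view_count": 50277
}
"""

post = json.loads(raw)
interactions = (post["favorite_count"] + post["retweet_count"]
                + post["reply_count"] + post["quote_count"])
engagement_rate = interactions / post["view_count"]
print(f"{interactions} interactions / {post['view_count']} views "
      f"= {engagement_rate:.2%} engagement")
```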