🐦 Twitter Post Details


@miguelgfierro

The rate of innovation we are seeing in LLMs is mindblowing.

Here is a study comparing a fine-tuned Llama-2 with GPT4.

Some takeaways:

- Fine-tuned Llama-2 outperforms GPT4 in SQL and unstructured data understanding.
- GPT4 outperforms fine-tuned Llama-2 in math reasoning.

This result is very interesting, and it shows the potential of OSS models.

However, let's not forget that we are not comparing apples to apples.

Llama-2 here is a fine-tuned model, and GPT4 is a zero-shot model.

A few months ago, it was unimaginable to think that a zero-shot model would outperform a fine-tuned model. Now we are making benchmarks of the opposite.

We are living in exponential times.

Details here: https://t.co/gqGWZa2Rd3
____
#AI #datascience #machinelearning #LLM

🔧 Raw API Response

{
  "user": {
    "created_at": "2010-10-26T06:02:13.000Z",
    "default_profile_image": false,
    "description": "60k+@LinkedIn · AI@Microsoft · I help people understand and apply AI",
    "fast_followers_count": 0,
    "favourites_count": 1972,
    "followers_count": 1035,
    "friends_count": 14,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 146,
    "location": "Madrid&London",
    "media_count": 747,
    "name": "Miguel Fierro",
    "normal_followers_count": 1035,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/207872592/1350213977",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1295385031779778560/5uenSIg0_normal.jpg",
    "screen_name": "miguelgfierro",
    "statuses_count": 7502,
    "translator_type": "none",
    "url": "https://t.co/27fsbzjebA",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "207872592"
  },
  "id": "1696035525226901705",
  "conversation_id": "1696035525226901705",
  "full_text": "The rate of innovation we are seeing in LLMs is mindblowing.\n\nHere is a study comparing a fine-tuned Llama-2 with GPT4.\n\nSome takeaways:\n\n- Fine-tuned Llama-2 outperforms GPT4 in SQL and unstructured data understanding.\n- GPT4 outperforms fine-tuned Llama-2 in math reasoning.\n\nThis result is very interesting, and it shows the potential of OSS models.\n\nHowever, let's not forget that we are not comparing apples to apples.\n\nLlama-2 here is a fine-tuned model, and GPT4 is a zero-shot model.\n\nA few months ago, it was unimaginable to think that a zero-shot model would outperform a fine-tuned model. Now we are making benchmarks of the opposite.\n\nWe are living in exponential times.\n\nDetails here: https://t.co/gqGWZa2Rd3\n____\n#AI #datascience #machinelearning #LLM",
  "reply_count": 5,
  "retweet_count": 9,
  "favorite_count": 59,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F4mHLOVXkAARWpx.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/miguelgfierro/status/1696035525226901705",
  "created_at": "2023-08-28T05:42:34.000Z",
  "#sort_index": "1696035525226901705",
  "view_count": 30982,
  "quote_count": 5,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/miguelgfierro/status/1696035525226901705"
}
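A minimal sketch of how a response like the one above can be consumed, assuming the field names shown in this capture (they may differ across Twitter API versions or scraper tools). The `engagement` ratio is a hypothetical metric added purely for illustration; only the values embedded below come from the response above.

```python
import json

# Trimmed copy of the captured response, keeping only the fields used here.
raw = """{
  "user": {"screen_name": "miguelgfierro", "followers_count": 1035},
  "id": "1696035525226901705",
  "favorite_count": 59,
  "retweet_count": 9,
  "view_count": 30982,
  "created_at": "2023-08-28T05:42:34.000Z"
}"""

tweet = json.loads(raw)

# Likes + retweets per view: a hypothetical engagement ratio for illustration.
engagement = (tweet["favorite_count"] + tweet["retweet_count"]) / tweet["view_count"]

print(tweet["user"]["screen_name"], round(engagement, 4))  # → miguelgfierro 0.0022
```

Because counts arrive as plain JSON numbers while IDs arrive as strings (`"id_str"`, `"id"`), keeping IDs as strings avoids precision loss on 64-bit snowflake IDs.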