🐦 Twitter Post Details


@AlphaSignalAI

The highly expected Mistral 7B is out.

Yes, from the French startup with a $113M seed round.

The model already outperforms Llama 2 13B on every benchmark.

Features
- Released under Apache 2.0 licence.
- Superior to LLaMA 1 34B in code, math, and reasoning
- Approaches CodeLlama 7B performance on code

Usability
- Usable anywhere (even locally)
- Deployable on any cloud (AWS/GCP/Azure)
- Usable on HuggingFace

Architecture
- Uses Grouped-query attention (GQA) for faster inference
- Uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost

https://t.co/vrtvl0kIpX
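The sliding-window idea mentioned above can be illustrated with a minimal mask sketch (this is an illustration of the general SWA technique, not Mistral's actual implementation): each token attends only to itself and the previous `window - 1` tokens, so attention cost grows with `seq_len * window` rather than `seq_len**2`.

```python
def sliding_window_mask(seq_len, window):
    """Boolean attention mask for sliding-window attention (sketch).

    mask[i][j] is True when query token i may attend to key token j:
    causal (j <= i) and within the window (j > i - window).
    """
    return [[max(0, i - window + 1) <= j <= i for j in range(seq_len)]
            for i in range(seq_len)]
```

For example, with `seq_len=4` and `window=2`, token 3 attends only to tokens 2 and 3, never to tokens 0 or 1.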

πŸ”§ Raw API Response

{
  "user": {
    "created_at": "2012-11-07T07:19:36.000Z",
    "default_profile_image": false,
    "description": "Covering the latest breakthroughs in AI β€’ ML Engineer building AlphaSignal β†’ A technical newsletter read by 130,000+ industry experts.",
    "fast_followers_count": 0,
    "favourites_count": 4279,
    "followers_count": 68082,
    "friends_count": 728,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 1468,
    "location": "You should join β†’",
    "media_count": 388,
    "name": "Lior⚑",
    "normal_followers_count": 68082,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/931470139/1681303371",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1599792074336964608/CobSHV8l_normal.jpg",
    "screen_name": "AlphaSignalAI",
    "statuses_count": 2493,
    "translator_type": "none",
    "url": "https://t.co/AyubevadmD",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "931470139"
  },
  "id": "1707096129555337375",
  "conversation_id": "1707096129555337375",
  "full_text": "The highly expected Mistral 7B is out.\n\nYes, from the french startup with a $113M seed round.\n\nThe model already outperforms Llama 2 13B on every benchmark.\n\nFeatures\n- Released under Apache 2.0 licence. \n- Superior to LLaMA 1 34B in code, math, and reasoning\n- Approaches CodeLlama 7B performance on code\n\nUsability\n- Usable anywhere (even locally) \n- Deployable on any cloud (AWS/GCP/Azure)\n- Usable on HuggingFace\n\nArchitecture\n- Uses Grouped-query attention (GQA) for faster inference\n-Uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost\n\nhttps://t.co/vrtvl0kIpX",
  "reply_count": 10,
  "retweet_count": 59,
  "favorite_count": 267,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F7DSsh-XAAAwX7u.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/AlphaSignalAI/status/1707096129555337375",
  "created_at": "2023-09-27T18:13:28.000Z",
  "#sort_index": "1707096129555337375",
  "view_count": 38478,
  "quote_count": 4,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/alphasignalai/status/1707096129555337375"
}
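The raw response above is plain JSON, so the interesting fields can be pulled out with ordinary JSON handling. A minimal sketch, assuming only the field names visible in the response (`summarize_tweet` is a hypothetical helper, not part of any Twitter client library):

```python
import json

def summarize_tweet(raw: str) -> str:
    """Build a one-line summary from a raw API response string
    shaped like the JSON dump above."""
    post = json.loads(raw)
    user = post["user"]
    return (f"@{user['screen_name']} ({user['followers_count']:,} followers): "
            f"{post['favorite_count']} likes, {post['retweet_count']} retweets, "
            f"{post['view_count']} views")
```

Applied to the response above, this would yield a line like `@AlphaSignalAI (68,082 followers): 267 likes, 59 retweets, 38478 views`.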