🐦 Twitter Post Details


@_philschmid

Deploy your local GGUF models to the cloud with just one click. 🤯 Excited to share @huggingface Inference Endpoints now natively supports llama.cpp, enabling one-click deployment of your local models to the cloud (AWS/Azure/GCP) with an @OpenAI-compatible endpoint. 🤯

TL;DR:
💡 Optimized llama.cpp container for Hugging Face Inference Endpoints
🦙 Supports all popular open Models in GGUF format, like @AIatMeta Llama, @GoogleDeepMind Gemma, @MistralAI ….
📈 Seamless transition from local to cloud deployment
🛠️ OpenAI-compatible endpoint for easy integration
📚 Multi-cloud support (@awscloud, @Azure, @googlecloud) using GPUs
💰 Llama.cpp team directly benefits from deployments

We're actively collaborating with @ggerganov and the llama.cpp team to improve this functionality. In the future, expect more features, broader hardware support, and improved performance. 🤝
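Because the endpoint is OpenAI-compatible, a deployed model can be queried with a standard chat-completions request. The sketch below builds such a payload with only the standard library; the endpoint URL is a hypothetical placeholder (each Inference Endpoints deployment gets its own URL), and the model name is an assumption.

```python
import json

# Hypothetical placeholder; substitute your own deployment's URL.
ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gguf-model") -> dict:
    """Build a minimal OpenAI-compatible chat completion payload."""
    return {
        "model": model,  # assumed name; llama.cpp servers often ignore this field
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

payload = build_chat_request("Hello from llama.cpp!")
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed to `ENDPOINT_URL` with an `Authorization: Bearer <token>` header, or passed through the official `openai` client by pointing its `base_url` at the deployment.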

🔧 Raw API Response

{
  "user": {
    "created_at": "2019-06-18T18:39:49.000Z",
    "default_profile_image": false,
    "description": "Tech Lead and LLMs at @huggingface 👨🏻‍💻 🤗  AWS ML Hero 🦸🏻 | Cloud & ML enthusiast | 📍Nuremberg | 🇩🇪 https://t.co/l1ppq3q3hk",
    "fast_followers_count": 0,
    "favourites_count": 5136,
    "followers_count": 27722,
    "friends_count": 820,
    "has_custom_timelines": false,
    "is_translator": false,
    "listed_count": 656,
    "location": "Nürnberg",
    "media_count": 999,
    "name": "Philipp Schmid",
    "normal_followers_count": 27722,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/1141052916570214400/1725456070",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1831321531852496896/1yBZG884_normal.jpg",
    "screen_name": "_philschmid",
    "statuses_count": 3072,
    "translator_type": "none",
    "url": "https://t.co/8BDXIK6omb",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "1141052916570214400"
  },
  "id": "1843168204610273539",
  "conversation_id": "1843168204610273539",
  "full_text": "Deploy your local GGUF models to the cloud with just one click. 🤯 Excited to share @huggingface Inference Endpoints now natively supports llama.cpp, enabling one-click deployment of your local models to the cloud (AWS/Azure/GCP) with an @OpenAI-compatible endpoint. 🤯\n\nTL;DR:\n💡 Optimized llama.cpp container for Hugging Face Inference Endpoints\n🦙 Supports all popular open Models in GGUF format, like @AIatMeta Llama, @GoogleDeepMind Gemma, @MistralAI ….\n📈 Seamless transition from local to cloud deployment\n🛠️ OpenAI-compatible endpoint for easy integration\n📚 Multi-cloud support (@awscloud, @Azure, @googlecloud) using GPUs\n💰 Llama.cpp team directly benefits from deployments\n\nWe're actively collaborating with @ggerganov and the llama.cpp team to improve this functionality. In the future, expect more features, broader hardware support, and improved performance. 🤝",
  "reply_count": 1,
  "retweet_count": 12,
  "favorite_count": 66,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [
    {
      "id_str": "778764142412984320",
      "name": "Hugging Face",
      "screen_name": "huggingface",
      "profile": "https://twitter.com/huggingface"
    },
    {
      "id_str": "4398626122",
      "name": "OpenAI",
      "screen_name": "OpenAI",
      "profile": "https://twitter.com/OpenAI"
    }
  ],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GZQ_tAbWgAAUQaW.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/_philschmid/status/1843168204610273539",
  "created_at": "2024-10-07T05:55:19.000Z",
  "#sort_index": "1843168204610273539",
  "view_count": 20129,
  "quote_count": 1,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/_philschmid/status/1843168204610273539"
}
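The raw response above carries several per-tweet counters (`reply_count`, `retweet_count`, `favorite_count`, `quote_count`, `view_count`). A minimal sketch of summarizing them, using a trimmed copy of the fields rather than the full response:

```python
import json

# Trimmed subset of the raw API response shown above; the real payload
# carries many more fields (user object, media, mentions, ...).
raw = json.loads("""
{
  "reply_count": 1,
  "retweet_count": 12,
  "favorite_count": 66,
  "quote_count": 1,
  "view_count": 20129
}
""")

def engagement_summary(tweet: dict) -> dict:
    """Sum the interaction counters and relate them to views."""
    interactions = (tweet["reply_count"] + tweet["retweet_count"]
                    + tweet["favorite_count"] + tweet["quote_count"])
    return {
        "interactions": interactions,
        "engagement_rate": round(interactions / tweet["view_count"], 4),
    }

summary = engagement_summary(raw)
print(summary)  # 1 + 12 + 66 + 1 = 80 interactions over 20129 views
```

The "engagement rate" metric here is an illustrative choice, not something the API returns.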