🐦 Twitter Post Details


@iScienceLuvr

Large Language Models Are Zero-Shot Time Series Forecasters

abs: https://t.co/aVDZ1sD4FT
code: https://t.co/1AwskkzjrS

Introduces LLMTime, a simple method to apply pretrained LLMs for continuous time series prediction problems. Main trick is to ensure each digit is tokenized (by adding spaces between digits for example). Their approach obtains SOTA on various benchmarks.

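The digit-spacing trick the tweet mentions can be sketched as a simple string encoding: format each value to fixed precision, drop the decimal point, and insert spaces between digits so a tokenizer treats each digit as its own token, with commas separating timesteps. This is a minimal illustrative sketch of the idea as described in the tweet; the function names, the fixed-precision scheme, and the handling details are assumptions, not the actual API of the linked llmtime repository.

```python
def encode_series(values, precision=2):
    """Encode non-negative floats as space-separated digit strings,
    one comma-separated chunk per timestep (illustrative sketch)."""
    tokens = []
    for v in values:
        # fix precision, then drop the decimal point: 0.12 -> "012"
        digits = f"{v:.{precision}f}".replace(".", "")
        # space out digits so each one maps to a single token
        tokens.append(" ".join(digits))
    return " , ".join(tokens)


def decode_series(text, precision=2):
    """Invert encode_series: strip spaces, reinsert the implied decimal."""
    values = []
    for chunk in text.split(","):
        digits = chunk.replace(" ", "")
        values.append(int(digits) / 10**precision)
    return values


# Example round trip:
encoded = encode_series([0.123, 1.23])   # "0 1 2 , 1 2 3"
decoded = decode_series(encoded)         # [0.12, 1.23]
```

A prompt built from such encodings is fed to the LLM, and the sampled continuation is decoded back into numbers; negative values and rescaling would need extra handling not shown here.
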
🔧 Raw API Response

{
  "user": {
    "created_at": "2011-12-20T03:45:50.000Z",
    "default_profile_image": false,
    "description": "PhD at 19 |\nFounder and CEO at @MedARC_AI |\nResearch Director at @StabilityAI | \n@kaggle Notebooks GM |\nBiomed. engineer @ 14 |\nTEDx talk➡https://t.co/DwMkst4bnG",
    "fast_followers_count": 0,
    "favourites_count": 60004,
    "followers_count": 45437,
    "friends_count": 995,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 703,
    "location": "",
    "media_count": 1203,
    "name": "Tanishq Mathew Abraham, PhD",
    "normal_followers_count": 45437,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/441465751/1675968078",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1553508977735962624/nnlSwBmu_normal.jpg",
    "screen_name": "iScienceLuvr",
    "statuses_count": 12087,
    "translator_type": "none",
    "url": "https://t.co/nNzCz2VVd1",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "441465751"
  },
  "id": "1712739848141013061",
  "conversation_id": "1712739848141013061",
  "full_text": "Large Language Models Are Zero-Shot Time Series Forecasters\n\nabs: https://t.co/aVDZ1sD4FT\ncode: https://t.co/1AwskkzjrS\n\nIntroduces LLMTime, a simple method to apply pretrained LLMs for continuous time series prediction problems. Main trick is to ensure each digit is tokenized (by adding spaces between digits for example). Their approach obtains SOTA on various benchmarks.",
  "reply_count": 2,
  "retweet_count": 64,
  "favorite_count": 346,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/4BE2rWRxye",
      "expanded_url": "https://arxiv.org/abs/2310.07820",
      "display_url": "arxiv.org/abs/2310.07820"
    },
    {
      "url": "https://t.co/1DYZoovQka",
      "expanded_url": "https://github.com/ngruver/llmtime",
      "display_url": "github.com/ngruver/llmtime"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F8TeNLcbYAA6YbR.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/iScienceLuvr/status/1712739848141013061",
  "created_at": "2023-10-13T07:59:35.000Z",
  "#sort_index": "1712739848141013061",
  "view_count": 60145,
  "quote_count": 4,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/iscienceluvr/status/1712739848141013061"
}