🐦 Twitter Post Details

@ipoupyrev

Introducing TimeFusion, our new multimodal foundation model that unlocks a unified language between humans and sensors.

For decades, organizations have had access to billions of sensor signals across industry, infrastructure, energy systems, and personal devices, but almost none of that data has been easy to understand or act upon directly.

TimeFusion changes that. You can *ask* a machine about vibration anomalies, *generate* new signals from a text description, or *forecast* what comes next, all in plain English.

Here is how it works:

🔹 TimeFusion is a general sensor–language fusion model: a 2-billion-parameter transformer trained to ingest and produce both natural language and raw time-series signals in a single continuous framework.

🔹 Unlike previous approaches that compress sensor data into narrow text-only formats, TimeFusion uses Universal Tokens to combine time-series signals and language inside one shared vocabulary. This lets the model understand physical data directly instead of translating it through workarounds, and perform forecasting, anomaly detection, filtering, imputation, captioning, QA, and generation through one unified interface.

🔹 TimeFusion outperforms much larger models such as GPT-5, Claude Sonnet, and GLM 4.6 on sensor-related tasks despite being orders of magnitude smaller.

And the model isn't just translating signals into text. It can already perform powerful text-to-signal transformations: forecasting the future of a waveform, reconstructing missing data, filtering noise, or reshaping a signal based on a natural-language prompt, producing new signals rather than text as output.

This opens the door to an entirely new category of interfaces with the physical world, where engineers, operators, doctors, city systems, and even consumers can converse with the machines and environments around them instead of digging through raw numbers and graphs.

A new way to talk to the physical world is here 🌍 #PhysicalAI
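The post does not describe how Universal Tokens are built, so the following is a minimal sketch of the general idea of a shared sensor-language vocabulary, assuming simple amplitude binning. Every name and size here (TEXT_VOCAB_SIZE, NUM_SIGNAL_BINS, signal_to_tokens, build_prompt) is invented for illustration, not TimeFusion's actual tokenizer.

import numpy as np

TEXT_VOCAB_SIZE = 32_000   # assumed size of the base text vocabulary
NUM_SIGNAL_BINS = 1_024    # assumed number of discrete amplitude bins

def signal_to_tokens(signal: np.ndarray) -> list[int]:
    """Quantize a 1-D waveform into discrete token IDs that live in the
    same ID space as text tokens (offset past the text vocabulary)."""
    lo, hi = signal.min(), signal.max()
    normed = (signal - lo) / (hi - lo + 1e-9)   # scale to [0, 1]
    bins = np.clip((normed * NUM_SIGNAL_BINS).astype(int), 0, NUM_SIGNAL_BINS - 1)
    return [TEXT_VOCAB_SIZE + int(b) for b in bins]  # shift into the signal range

def build_prompt(text_ids: list[int], signal: np.ndarray) -> list[int]:
    """Concatenate text tokens and signal tokens into one sequence, so a
    single transformer can attend across both modalities."""
    return text_ids + signal_to_tokens(signal)

# Example: 256 samples of a noisy sine wave appended after a placeholder prompt.
t = np.linspace(0, 4 * np.pi, 256)
wave = np.sin(t) + 0.05 * np.random.randn(256)
tokens = build_prompt([101, 2054, 2003], wave)  # placeholder text token IDs
print(len(tokens), min(tokens), max(tokens))

With one vocabulary, tasks like forecasting, captioning, and QA all reduce to next-token prediction over the same sequence space, which is what the post's "one unified interface" claim suggests.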
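The "producing new signals rather than text as output" claim implies the inverse step as well: decoding generated signal-range tokens back into a waveform. A companion sketch under the same invented assumptions as above (tokens_to_signal is hypothetical):

import numpy as np

TEXT_VOCAB_SIZE = 32_000
NUM_SIGNAL_BINS = 1_024

def tokens_to_signal(tokens: list[int], lo: float, hi: float) -> np.ndarray:
    """Map signal-range token IDs back to amplitudes (bin centers),
    skipping any text tokens the model may have interleaved."""
    bins = np.array([t - TEXT_VOCAB_SIZE for t in tokens if t >= TEXT_VOCAB_SIZE])
    centers = (bins + 0.5) / NUM_SIGNAL_BINS   # bin center in [0, 1]
    return lo + centers * (hi - lo)            # undo the normalization

# Example: round-trip a forecast a model might emit as signal tokens.
fake_generated = [TEXT_VOCAB_SIZE + int(b) for b in np.linspace(0, 1023, 64).astype(int)]
waveform = tokens_to_signal(fake_generated, lo=-1.0, hi=1.0)
print(waveform.shape, waveform.min(), waveform.max())

This is how a shared-vocabulary model could answer a forecasting or signal-reshaping prompt with a waveform instead of a sentence: it simply decodes into the signal half of the vocabulary.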

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2002092589747666968/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2002092589747666968/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-22T23:03:30.053693",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2002092589747666968",
  "url": "https://x.com/ipoupyrev/status/2002092589747666968",
  "twitterUrl": "https://twitter.com/ipoupyrev/status/2002092589747666968",
  "text": "Introducing TimeFusion, our new multimodal foundation model that unlocks a unified language between humans and sensors.\n\nFor decades, organizations had access to billions of sensor signals across industry, infrastructure, energy systems, and personal devices but almost none of that data has been easy to understand or act upon directly.\n\nTimeFusion changes that. That means you can *ask* a machine about vibration anomalies, *generate* new signals from a text description, or *forecast* what comes next — all in plain English.\n\nHere is how it works:\n\n🔹 TimeFusion is a general sensor–language fusion model: a 2-billion-parameter transformer trained to ingest and produce both natural language and raw time-series signals in a single continuous framework.\n\n🔹 Unlike previous approaches that compress sensor data into narrow text-only formats, TimeFusion uses Universal Tokens to combine time-series signals and language inside one shared vocabulary. This enables the model to truly understand physical data instead of translating it through various hacks — and to perform forecasting, anomaly detection, filtering, imputation, captioning, QA, and generation through one unified interface.\n\n🔹 TimeFusion outperforms much larger models like GPT-5, Claude Sonnet, GLM 4.6 and others on sensor-related tasks despite being orders of magnitude smaller.\n\nAnd the model isn’t just translating signals into text.\n\nIt can already do powerful text-to-signal transformations: forecasting the future of a waveform, reconstructing missing data, filtering noise, or reshaping a signal based on a natural-language prompt producing new signals rather then text as output.\n\nThis opens the door to an entirely new category of interfaces with the physical world — where engineers, operators, doctors, city systems, and even consumers can converse with the machines and environments around them, instead of digging through raw numbers and graphs.\n\nA new way to talk to the physical world is here 🌍 #PhysicalAI",
  "source": "Twitter for iPhone",
  "retweetCount": 4,
  "replyCount": 1,
  "likeCount": 19,
  "quoteCount": 1,
  "viewCount": 2097,
  "createdAt": "Fri Dec 19 19:04:05 +0000 2025",
  "lang": "en",
  "bookmarkCount": 12,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2002092589747666968",
  "displayTextRange": [
    0,
    278
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "ipoupyrev",
    "url": "https://x.com/ipoupyrev",
    "twitterUrl": "https://twitter.com/ipoupyrev",
    "id": "364927618",
    "name": "Ivan Poupyrev",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1120190579906793474/yoacUsL6_normal.png",
    "coverPicture": "https://pbs.twimg.com/profile_banners/364927618/1697212002",
    "description": "Founder, CEO at @PhysicalAI. Tech leader and executive, interaction designer, scientist. Google, Disney, Sony before. 2019 National Design Award. TED speaker.",
    "location": "San Francisco & Bay Area, USA",
    "followers": 4711,
    "following": 311,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Tue Aug 30 15:43:57 +0000 2011",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {
        "urls": [
          {
            "display_url": "archetypeai.io",
            "expanded_url": "https://www.archetypeai.io/",
            "url": "https://t.co/3VMCDqX5fA",
            "indices": [
              0,
              23
            ]
          }
        ]
      }
    },
    "fastFollowersCount": 0,
    "favouritesCount": 1750,
    "hasCustomTimelines": false,
    "isTranslator": false,
    "mediaCount": 575,
    "statusesCount": 2927,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {
      "description": "Founder, CEO at @PhysicalAI. Tech leader and executive, interaction designer, scientist. Google, Disney, Sony before. 2019 National Design Award. TED speaker."
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.x.com/nl79ena6VJ",
        "expanded_url": "https://x.com/ipoupyrev/status/2002092589747666968/photo/1",
        "id_str": "2002092583120744450",
        "indices": [
          279,
          302
        ],
        "media_key": "3_2002092583120744450",
        "media_url_https": "https://pbs.twimg.com/media/G8jcmMhbMAI-A8e.jpg",
        "type": "photo",
        "url": "https://t.co/nl79ena6VJ",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "medium": {
            "faces": []
          },
          "small": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "sizes": {
          "large": {
            "h": 1324,
            "w": 2048,
            "resize": "fit"
          },
          "medium": {
            "h": 776,
            "w": 1200,
            "resize": "fit"
          },
          "small": {
            "h": 440,
            "w": 680,
            "resize": "fit"
          },
          "thumb": {
            "h": 150,
            "w": 150,
            "resize": "crop"
          }
        },
        "original_info": {
          "height": 2892,
          "width": 4473,
          "focus_rects": [
            {
              "x": 0,
              "y": 0,
              "w": 4473,
              "h": 2505
            },
            {
              "x": 790,
              "y": 0,
              "w": 2892,
              "h": 2892
            },
            {
              "x": 968,
              "y": 0,
              "w": 2537,
              "h": 2892
            },
            {
              "x": 1513,
              "y": 0,
              "w": 1446,
              "h": 2892
            },
            {
              "x": 0,
              "y": 0,
              "w": 4473,
              "h": 2892
            }
          ]
        },
        "media_results": {
          "result": {
            "media_key": "3_2002092583120744450"
          }
        }
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [
      {
        "indices": [
          1977,
          1988
        ],
        "text": "PhysicalAI"
      }
    ],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "article": null
}