🐦 Twitter Post Details

Viewing enriched Twitter post

@kenziyuliu

Sharing a super simple, user-owned memory module we've been playing around with: nanomem

The basic idea is to treat memory as a pure intelligence problem: ingestion, structuring, and (selective) retrieval are all just LLM calls & agent loops on an on-device markdown file tree. Each file lists a set of facts w/ metadata (timestamp, confidence, source, etc.); no embeddings/RAG/training of any kind.

For example:
- `nanomem add <fact>` starts an agent loop to walk the tree, read relevant files, and edit.
- `nanomem retrieve <query>` walks the tree and returns a single summary string (possibly assembled from many subtrees) related to the query.

What's nice about this approach is that the memory system is, by construction:

1. partitionable (humans/agents can easily separate `hobbies/snowboard.md` from `tax/residency.md` for data minimization + relevance)
2. portable and user-owned (it's just text files)
3. interpretable (you know exactly what's written and you can manually edit)
4. forward-compatible (future models can read memory files just the same, and memory quality/speed improves as models get better)
5. modularized (you can optimize ingestion/retrieval/compaction prompts separately)

Privacy & utility. I'm most excited about the ability to partition + selectively disclose memory at inference time. Selective disclosure helps with both privacy (principle of least privilege & "need-to-know") and utility (too much context for a query can harm answer quality).

Composability. An inference-time memory module means: (1) you can run such a module with confidential inference (LLMs on TEEs) for provable privacy, and (2) you can selectively disclose context over unlinkable inference of remote models (demo below).

We built nanomem as part of the Open Anonymity project (https://t.co/fO14l5hRkp), but it's meant to be a standalone module for humans and agents (e.g., you can write a SKILL for using the CLI tool). Still polishing the rough edges!
- GitHub (MIT): https://t.co/YYDCk5sIzc
- Blog: https://t.co/pexZTFdWzz
- Beta implementation in chat client soon: https://t.co/rsMjL3wzKQ

Work done with amazing project co-leads @amelia_kuang @cocozxu @erikchi !!
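To make the described design concrete, here is a minimal, self-contained sketch of the idea: memory as a plain markdown file tree where each file lists facts with inline metadata, plus a tree-walking retrieval that assembles one summary string. This is an illustration of the post's description, not nanomem's actual code or API — the file layout, fact format, and function names are assumptions, and a simple keyword match stands in for the LLM agent loop that the real tool would use.

```python
# Sketch of the nanomem idea (assumptions, not the real implementation):
# memory lives in an on-device markdown tree; each file holds fact lines
# with metadata; retrieval walks the tree and returns one summary string.
from pathlib import Path
import tempfile


def add_fact(root: Path, relpath: str, fact: str, *, source: str, confidence: float) -> None:
    """Append a fact line with metadata to a markdown file, creating parents as needed."""
    path = root / relpath
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as fh:
        # Metadata kept as an HTML comment so the file still renders as plain markdown.
        fh.write(f"- {fact} <!-- source={source} confidence={confidence} -->\n")


def retrieve(root: Path, query: str) -> str:
    """Walk the tree and assemble a single summary string of relevant facts.

    nanomem would let an LLM judge relevance per file/subtree; a keyword
    match is used here only so the walk is runnable without a model.
    """
    hits = []
    for f in sorted(root.rglob("*.md")):
        for line in f.read_text().splitlines():
            if query.lower() in line.lower():
                fact = line.lstrip("- ").split("<!--")[0].strip()
                hits.append(f"{f.relative_to(root)}: {fact}")
    return "\n".join(hits) if hits else "(no matching facts)"


# Partitioned subtrees, as in the post's example: hobbies/ vs tax/.
root = Path(tempfile.mkdtemp())
add_fact(root, "hobbies/snowboard.md", "Prefers powder over groomers", source="chat", confidence=0.9)
add_fact(root, "tax/residency.md", "CA resident since 2022", source="form", confidence=1.0)
print(retrieve(root, "powder"))
```

Because retrieval is just a walk over files, selective disclosure falls out for free: point `retrieve` at `root / "hobbies"` instead of `root` and the `tax/` subtree is never read, matching the "need-to-know" partitioning the post describes.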


📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2044479111041794496/media_0.mp4",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2044479111041794496/media_0.mp4",
      "type": "video",
      "filename": "media_0.mp4"
    },
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2044479111041794496/media_1.jpg",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2044479111041794496/media_1.jpg",
      "type": "photo",
      "filename": "media_1.jpg"
    }
  ],
  "processed_at": "2026-04-17T04:04:42.260976",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2044479111041794496",
  "url": "https://x.com/kenziyuliu/status/2044479111041794496",
  "twitterUrl": "https://twitter.com/kenziyuliu/status/2044479111041794496",
  "text": "Sharing a super simple, user-owned memory module we've been playing around: nanomem\n\nThe basic idea is to treat memory as a pure intelligence problem: ingestion, structuring, and (selective) retrieval are all just LLM calls & agent loops on a on-device markdown file tree. Each file lists a set of facts w/ metadata (timestamp, confidence, source, etc.); no embeddings/RAG/training of any kind.\n\nFor example: \n- `nanomem add <fact>` starts an agent loop to walk the tree, read relevant files, and edit.\n- `nanomem retrieve <query>` walks the tree and returns a single summary string (possibly assembled from many subtrees) related to the query.\n\nWhat’s nice about this approach is that the memory system is, by construction:\n\n1. partitionable (human/agents can easily separate `hobbies/snowboard.md` from `tax/residency.md` for data minimization + relevance)\n2. portable and user-owned (it’s just text files)\n3. interpretable (you know exactly what’s written and you can manually edit)\n4. forward-compatible (future models can read memory files just the same, and memory quality/speed improves as models get better)\n5. modularized (you can optimize ingestion/retrieval/compaction prompts separately)   \n\nPrivacy & utility. I'm most excited about the ability to partition + selectively disclose memory at inference-time. Selective disclosure helps with both privacy (principle of least privilege & “need-to-know”) and utility (as too much context for a query can harm answer quality).    \n\nComposability. An inference-time memory module means: (1) you can run such a module with confidential inference (LLMs on TEEs) for provable privacy, and (2) you can selectively disclose context over unlinkable inference of remote models (demo below).\n\nWe built nanomem as part of the Open Anonymity project (https://t.co/fO14l5hRkp), but it’s meant to be a standalone module for humans and agents (e.g., you can write a SKILL for using the CLI tool).\n\nStill polishing the rough edges!\n\n- GitHub (MIT): https://t.co/YYDCk5sIzc\n- Blog: https://t.co/pexZTFdWzz\n- Beta implementation in chat client soon: https://t.co/rsMjL3wzKQ   \n\nWork done with amazing project co-leads @amelia_kuang @cocozxu @erikchi !!",
  "source": "Twitter for iPhone",
  "retweetCount": 23,
  "replyCount": 4,
  "likeCount": 140,
  "quoteCount": 2,
  "viewCount": 12337,
  "createdAt": "Wed Apr 15 18:12:59 +0000 2026",
  "lang": "en",
  "bookmarkCount": 115,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2044479111041794496",
  "displayTextRange": [
    0,
    281
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "kenziyuliu",
    "url": "https://x.com/kenziyuliu",
    "twitterUrl": "https://twitter.com/kenziyuliu",
    "id": "820474984225062912",
    "name": "Ken Liu",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1873465892044263424/5w8zM3eZ_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/820474984225062912/1678168272",
    "description": "",
    "location": "🌲",
    "followers": 3347,
    "following": 950,
    "status": "",
    "canDm": true,
    "canMediaTag": false,
    "createdAt": "Sun Jan 15 03:37:34 +0000 2017",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 7941,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 53,
    "statusesCount": 652,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2027145028737663259"
    ],
    "profile_bio": {
      "description": "CS PhD @StanfordAILab @StanfordNLP w/ @percyliang @sanmikoyejo. Working on AI, privacy/security, and often both.\nPast: @GoogleDeepMind, CMU, USydney 🇦🇺",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                7,
                21
              ],
              "name": "",
              "screen_name": "StanfordAILab"
            },
            {
              "id_str": "0",
              "indices": [
                22,
                34
              ],
              "name": "",
              "screen_name": "StanfordNLP"
            },
            {
              "id_str": "0",
              "indices": [
                38,
                49
              ],
              "name": "",
              "screen_name": "percyliang"
            },
            {
              "id_str": "0",
              "indices": [
                50,
                62
              ],
              "name": "",
              "screen_name": "sanmikoyejo"
            },
            {
              "id_str": "0",
              "indices": [
                119,
                134
              ],
              "name": "",
              "screen_name": "GoogleDeepMind"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "ai.stanford.edu/~kzliu",
              "expanded_url": "https://ai.stanford.edu/~kzliu",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/EmoOoQ749G"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "additional_media_info": {
          "monetizable": true
        },
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/kZkObFSOjX",
        "expanded_url": "https://twitter.com/kenziyuliu/status/2044479111041794496/video/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "id_str": "2044479052116062208",
        "indices": [
          282,
          305
        ],
        "media_key": "13_2044479052116062208",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwABAoAARxfct2kGvAAAAA=",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAAECgABHF9y3aQa8AAAAA==",
            "media_key": "13_2044479052116062208"
          }
        },
        "media_url_https": "https://pbs.twimg.com/amplify_video_thumb/2044479052116062208/img/f1y1R7S68-U_kPBM.jpg",
        "original_info": {
          "focus_rects": [],
          "height": 1082,
          "width": 1080
        },
        "sizes": {
          "large": {
            "h": 1082,
            "w": 1080
          }
        },
        "type": "video",
        "url": "https://t.co/kZkObFSOjX",
        "video_info": {
          "aspect_ratio": [
            540,
            541
          ],
          "duration_millis": 29875,
          "variants": [
            {
              "content_type": "application/x-mpegURL",
              "url": "https://video.twimg.com/amplify_video/2044479052116062208/pl/D0f7RS3ST0FuUQsP.m3u8?tag=21"
            },
            {
              "bitrate": 632000,
              "content_type": "video/mp4",
              "url": "https://video.twimg.com/amplify_video/2044479052116062208/vid/avc1/320x320/yDl1oHKkS9CyF_ge.mp4?tag=21"
            },
            {
              "bitrate": 950000,
              "content_type": "video/mp4",
              "url": "https://video.twimg.com/amplify_video/2044479052116062208/vid/avc1/480x480/EAOUKnZcfLPdSkmW.mp4?tag=21"
            },
            {
              "bitrate": 2176000,
              "content_type": "video/mp4",
              "url": "https://video.twimg.com/amplify_video/2044479052116062208/vid/avc1/720x720/T2IUCKADEr7E_flh.mp4?tag=21"
            },
            {
              "bitrate": 10368000,
              "content_type": "video/mp4",
              "url": "https://video.twimg.com/amplify_video/2044479052116062208/vid/avc1/1080x1082/C0SOpusFlwG4d47_.mp4?tag=21"
            }
          ]
        }
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "timestamps": [],
    "urls": [
      {
        "display_url": "openanoymity.ai",
        "expanded_url": "http://openanoymity.ai",
        "indices": [
          1797,
          1820
        ],
        "url": "https://t.co/fO14l5hRkp"
      },
      {
        "display_url": "github.com/OpenAnonymity/…",
        "expanded_url": "https://github.com/OpenAnonymity/nanomem",
        "indices": [
          1990,
          2013
        ],
        "url": "https://t.co/YYDCk5sIzc"
      },
      {
        "display_url": "openanonymity.ai/blog/nanomem/",
        "expanded_url": "https://openanonymity.ai/blog/nanomem/",
        "indices": [
          2022,
          2045
        ],
        "url": "https://t.co/pexZTFdWzz"
      },
      {
        "display_url": "chat.openanonymity.ai",
        "expanded_url": "https://chat.openanonymity.ai",
        "indices": [
          2089,
          2112
        ],
        "url": "https://t.co/rsMjL3wzKQ"
      }
    ],
    "user_mentions": [
      {
        "id_str": "1506453469787471876",
        "indices": [
          2157,
          2170
        ],
        "name": "Amelia Kuang",
        "screen_name": "amelia_kuang"
      },
      {
        "id_str": "1697321160449413120",
        "indices": [
          2171,
          2179
        ],
        "name": "Coco Xu",
        "screen_name": "cocozxu"
      },
      {
        "id_str": "1316315459466088448",
        "indices": [
          2180,
          2188
        ],
        "name": "Erik Chi",
        "screen_name": "erikchi"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "communityInfo": null,
  "article": null
}