🐦 Twitter Post Details

Viewing enriched Twitter post

@AndrewYNg

New short course: LLMs as Operating Systems: Agent Memory, created with @Letta_AI, and taught by its founders @charlespacker and @sarahwooders.

An LLM's input context window has limited space. Using a longer input context also costs more and results in slower processing. So, managing what's stored in this context window is important.

In the innovative paper MemGPT: Towards LLMs as Operating Systems, its authors (which include the instructors) proposed using an LLM agent to manage this context window. Their system uses a large persistent memory that stores everything that could be included in the input context, and an agent decides what is actually included.

Take the example of building a chatbot that needs to remember what's been said earlier in a conversation (perhaps over many days of interaction with a user). As the conversation's length grows, the memory management agent will move information from the input context to a persistent searchable database; summarize information to keep relevant facts in the input context; and restore relevant conversation elements from further back in time. This allows a chatbot to keep what's currently most relevant in its input context memory to generate the next response.

When I read the original MemGPT paper, I thought it was an innovative technique for handling memory for LLMs. The open-source Letta framework, which we'll use in this course, makes MemGPT easy to implement. It adds memory to your LLM agents and gives them transparent long-term memory.
In detail, you’ll learn:
- How to build an agent that can edit its own limited input context memory, using tools and multi-step reasoning
- What is a memory hierarchy (an idea from computer operating systems, which use a cache to speed up memory access), and how these ideas apply to managing the LLM input context (where the input context window is a "cache" storing the most relevant information; and an agent decides what to move in and out of this to/from a larger persistent storage system)
- How to implement multi-agent collaboration by letting different agents share blocks of memory

This course will give you a sophisticated understanding of memory management for LLMs, which is important for chatbots having long conversations, and for complex agentic workflows.

Please sign up here! https://t.co/XMlBifnwVa
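The memory hierarchy the post describes can be sketched in a few lines of plain Python. This is a toy illustration only, not Letta's actual API: the names `ContextManager`, `add`, and `recall` are invented here to show the core loop of evicting old messages from a limited "input context" into persistent storage and restoring them when relevant.

```python
# Toy sketch of a MemGPT-style memory hierarchy (illustrative names, not Letta's API).
from collections import deque

class ContextManager:
    """Keeps at most `budget` recent messages in the limited 'input context';
    older messages are evicted to a searchable archival store."""

    def __init__(self, budget):
        self.budget = budget
        self.context = deque()   # the limited "cache" (input context window)
        self.archive = []        # larger persistent, searchable storage

    def add(self, message):
        self.context.append(message)
        while len(self.context) > self.budget:
            # Move the oldest message out of the context window into the archive
            self.archive.append(self.context.popleft())

    def recall(self, keyword):
        """Search the archive and restore matching messages into the context."""
        hits = [m for m in self.archive if keyword.lower() in m.lower()]
        for m in hits:
            self.add(m)          # restoring may evict something less relevant
        return hits

mgr = ContextManager(budget=2)
for msg in ["My name is Ada.", "I like hiking.",
            "What's the weather?", "Tell me a joke."]:
    mgr.add(msg)

print(mgr.archive)          # the two oldest messages were archived
print(mgr.recall("name"))   # a relevant old fact is restored to the context
```

In a real system the eviction step would summarize rather than simply move messages, and an LLM agent (not a fixed rule) would decide what to evict and what to recall; the cache analogy from operating systems is the same either way.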

📊 Media Metadata

{
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.552124",
  "import_source": "network_archive_import",
  "links_checked": true,
  "checked_at": "2025-08-10T10:32:48.968861",
  "media": [
    {
      "type": "video",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1854587401018261962/media_0.mp4?",
      "filename": "media_0.mp4"
    },
    {
      "id": "",
      "type": "video",
      "url": null,
      "media_url": "https://pbs.twimg.com/ext_tw_video_thumb/1854576643475750914/pu/img/ocahDJr8Dan4pEBT.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "reprocessed_at": "2025-08-12T15:26:00.770483",
  "reprocessed_reason": "missing_media_array",
  "original_structure": "had_both"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2010-11-18T03:39:11.000Z",
    "default_profile_image": false,
    "description": "Co-Founder of Coursera; Stanford CS adjunct faculty. Former head of Baidu AI Group/Google Brain. #ai #machinelearning, #deeplearning #MOOCs",
    "fast_followers_count": 0,
    "favourites_count": 1524,
    "followers_count": 1111093,
    "friends_count": 962,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 14110,
    "location": "Palo Alto, CA",
    "media_count": 368,
    "name": "Andrew Ng",
    "normal_followers_count": 1111093,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/216939636/1483126470",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/733174243714682880/oyG30NEH_normal.jpg",
    "screen_name": "AndrewYNg",
    "statuses_count": 1799,
    "translator_type": "none",
    "url": "https://t.co/XidcMETENd",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "216939636"
  },
  "id": "1854587401018261962",
  "conversation_id": "1854587401018261962",
  "full_text": "New short course: LLMs as Operating Systems: Agent Memory, created with @Letta_AI, and taught by its founders @charlespacker and @sarahwooders.\n\nAn LLM's input context window has limited space. Using a longer input context also costs more and results in slower processing. So, managing what's stored in this context window is important.\n\nIn the innovative paper MemGPT: Towards LLMs as Operating Systems, its authors (which include the instructors) proposed using an LLM agent to manage this context window. Their system uses a large persistent memory that stores everything that could be included in the input context, and  an agent decides   what is actually included.\n\nTake the example of building a chatbot that needs to remember what's been said earlier in a conversation (perhaps over many days of interaction with a user). As the conversation's length grows, the memory management agent will move information from the input context to a persistent searchable database; summarize information to keep relevant facts in the input context; and restore relevant conversation elements from further back in time. This allows a chatbot to keep what's currently most relevant in its input context memory to generate the next response.\n\nWhen I read the original MemGPT paper, I thought it was an innovative technique for handling memory for LLMs. The open-source Letta framework, which we'll use in this course, makes MemGPT easy to implement. It adds memory to your LLM agents and gives them transparent long-term memory.\n\nIn detail, you’ll learn:\n- How to build an agent that can edit its own limited input context memory, using tools and multi-step reasoning\n- What is a memory hierarchy (an idea from computer operating systems, which use a cache to speed up memory access), and how these ideas apply to managing the LLM input context (where the input context window is a \"cache\" storing the most relevant information; and an agent decides what to move in and out of this to/from a larger persistent storage system)\n- How to implement multi-agent collaboration by letting different agents share blocks of memory\n\nThis course will give you a sophisticated understanding of memory management for LLMs, which is important for chatbots having long conversations, and for complex agentic workflows.\n\nPlease sign up here!  https://t.co/XMlBifnwVa",
  "reply_count": 99,
  "retweet_count": 345,
  "favorite_count": 2013,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [
    {
      "id_str": "1821252546469752832",
      "name": "Letta",
      "screen_name": "Letta_AI",
      "profile": "https://twitter.com/Letta_AI"
    },
    {
      "id_str": "2385913832",
      "name": "Charles Packer",
      "screen_name": "charlespacker",
      "profile": "https://twitter.com/charlespacker"
    },
    {
      "id_str": "144333614",
      "name": "Sarah Wooders 👾",
      "screen_name": "sarahwooders",
      "profile": "https://twitter.com/sarahwooders"
    }
  ],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/ext_tw_video_thumb/1854576643475750914/pu/img/ocahDJr8Dan4pEBT.jpg",
      "type": "video",
      "video_url": "https://video.twimg.com/ext_tw_video/1854576643475750914/pu/vid/avc1/1280x720/fTsWZYLbajngly0I.mp4?tag=12"
    }
  ],
  "url": "https://twitter.com/AndrewYNg/status/1854587401018261962",
  "created_at": "2024-11-07T18:11:07.000Z",
  "#sort_index": "1854587401018261962",
  "view_count": 179331,
  "quote_count": 30,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/andrewyng/status/1854587401018261962"
}