🐦 Twitter Post Details


@SchmidhuberAI

The 1st and 34th authors of @GoogleDeepMind's paper [1] each got 1/4 of a Nobel Prize for protein structure prediction through AlphaFold. Who invented that? (Disclaimer: a student from my lab co-founded DeepMind.)

The 2021 paper [1] failed to cite important prior work [2] by Baldi and Pollastri (2002): at a time when compute was roughly ten thousand times more expensive than in 2021, [2] introduced a pipeline very similar to that of AlphaFold 2, using multiple sequence alignment (MSA) to predict the secondary protein structure with the help of a position-specific scoring matrix (PSSM) or profile matrix, going beyond even earlier work of 1988 [5][6][10]. The extra step (absent in AlphaFold 2) was to predict the protein's topology, too. See also the follow-up work of 2012 [3].

[1] didn't cite @HochreiterSepp et al.'s first successful application [7] of deep learning to protein folding (2007, using an LSTM instead of MSA to construct a profile).

[1] also failed to cite the essential prior work by Golkov et al. (2016) [4][8], which had crucial aspects of AlphaFold: (1) identify homologous sequences in a database of proteins with known structure, (2) compute co-evolution statistics from the homologous sequences, (3) train a graph NN to predict the protein contact map (which determines its 3D structure) directly from the co-evolution statistics, (4) demonstrate experimentally a significant boost in performance on the CASP dataset [4][9]. See the attached image!

Instead of the contact map, DeepMind (2021) predicted the distance map, and instead of graph CNNs, they used the quadratic Transformer published in 2017 (the unnormalized linear Transformer had existed since 1991 [11]). DeepMind also used more training data and much more compute for hyperparameter tuning etc.

Image credits: [4][8]

REFERENCES

[1] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Zidek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli & D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature 596:583-589, 2021.

[2] P. Baldi, G. Pollastri. A machine learning strategy for protein analysis. IEEE Intelligent Systems 17(2):28-35, 2002.

[3] P. Di Lena, K. Nagata, and P. Baldi. Deep architectures for protein contact map prediction. Bioinformatics 28:2449-2457, 2012.

[4] V. Golkov, M. J. Skwark, A. Golkov, A. Dosovitskiy, T. Brox, J. Meiler, D. Cremers. Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images. NeurIPS, Barcelona, 2016.

[5] N. Qian and T. J. Sejnowski. Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 202:865-884, 1988.

[6] H. Bohr, J. Bohr, S. Brunak, R. M. J. Cotterill, B. Lautrup, L. Norskov, O. H. Olsen, S. B. Petersen. Protein secondary structure and homology by neural networks. The α-helices in rhodopsin. FEBS Lett. 241:223-228, 1988.

[7] S. Hochreiter, M. Heusel, K. Obermayer. Fast model-based protein homology detection without alignment. Bioinformatics 23(14):1728-1736, 2007. Successful application of deep learning to protein folding problems, through an LSTM that was orders of magnitude faster than competing methods.

[8] D. Cremers (July 2025). LinkedIn post on the Nobel Prize for AlphaFold.

[9] A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24, 2024 (updated 2025). https://t.co/u9YxfBuqNf . Popular tweets on this: https://t.co/heYSuPQDxp https://t.co/QQU9FKpqAh

[10] The Nobel Committee for Chemistry (2024). Scientific Background to the Nobel Prize in Chemistry 2024.

[11] Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Switzerland, 2022 (updated 2025). Preprint https://t.co/YZrEphq1qx . This extends the 2015 award-winning deep learning survey in the journal "Neural Networks."
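The MSA-to-profile step mentioned in the tweet can be sketched in a few lines. This is a minimal illustration of the general PSSM idea, not the code of [2]: the toy alignment, the pseudocount, and the uniform amino-acid background are all assumptions made for the example.

```python
# Building a position-specific scoring matrix (PSSM) from a toy
# multiple sequence alignment: column-wise log-odds scores of
# observed amino-acid frequencies against a uniform background.
from collections import Counter
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def pssm(msa, pseudocount=1.0):
    """One dict of log-odds scores per alignment column."""
    background = 1.0 / len(AMINO_ACIDS)
    matrix = []
    for col in range(len(msa[0])):
        counts = Counter(seq[col] for seq in msa)
        total = len(msa) + pseudocount * len(AMINO_ACIDS)
        matrix.append({
            aa: math.log2(((counts[aa] + pseudocount) / total) / background)
            for aa in AMINO_ACIDS
        })
    return matrix

# Toy alignment: three homologous sequences of length 4.
msa = ["ACDA", "ACDG", "ACEA"]
m = pssm(msa)
# Column 0 is fully conserved 'A', so 'A' gets the top score there.
assert m[0]["A"] == max(m[0].values())
```

In a secondary-structure pipeline, such a profile (one score vector per residue position) would serve as the network's input encoding instead of the raw one-hot sequence.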
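The co-evolution statistic behind steps (1)-(3) above can be illustrated with mutual information between alignment columns: residue pairs that mutate in a correlated way tend to be in spatial contact. This is a hypothetical sketch, not the method of [4], which feeds richer pairwise statistics into a trained convolutional network; here we only compute the raw pairwise signal.

```python
# Score each residue pair (i, j) by the mutual information between
# MSA columns i and j; high scores suggest co-evolving (contacting)
# positions.
from collections import Counter
import math

def mutual_information(msa, i, j):
    n = len(msa)
    pi = Counter(seq[i] for seq in msa)   # marginal of column i
    pj = Counter(seq[j] for seq in msa)   # marginal of column j
    pij = Counter((seq[i], seq[j]) for seq in msa)  # joint
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Toy alignment: columns 0 and 3 co-vary perfectly (A<->A, G<->G),
# while the other column pairs carry no mutual information.
msa = ["ACDA", "GCDG", "ACEA", "GCEG"]
scores = {(i, j): mutual_information(msa, i, j)
          for i in range(4) for j in range(i + 1, 4)}
assert max(scores, key=scores.get) == (0, 3)
```

A contact-prediction network would take a map of such pairwise statistics as input and output the contact (or, in AlphaFold's case, distance) map.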

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2011830777101574264/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2011830777101574264/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-01-18T19:42:24.683302",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2011830777101574264",
  "url": "https://x.com/SchmidhuberAI/status/2011830777101574264",
  "twitterUrl": "https://twitter.com/SchmidhuberAI/status/2011830777101574264",
  "text": "1st and 34th author of @GoogleDeepMind's paper [1] each got 1/4 Nobel Prize for protein structure prediction through Alphafold. Who invented that? (Disclaimer: a student from my lab co-founded DeepMind.)\n\nThe 2021 paper [1] failed to cite important prior work [2] by Baldi and Pollastri (2002): at a time when compute was roughly ten thousand times more expensive than in 2021, [2] introduced a pipeline very similar to the one of Alphafold 2, using multiple sequence alignment (MSA) to predict the secondary protein structure with the help of a position-specific scoring matrix (PSSM) or a profile matrix, going beyond even earlier work of 1988 [5][6][10]. The extra step (absent in Alphafold 2) was to predict the protein's topology, too. See also the follow-up work of 2012 [3]. \n\n[1] didn't cite @HochreiterSepp et al.'s first successful application [7] of deep learning to protein folding (2007, using LSTM instead of MSA to construct a profile).\n\n[1] also failed to cite the essential prior work by Golkov et al (2016) [4][8], which had crucial aspects of AlphaFold: (1) identify homologous sequences in a database of proteins with known structure, (2) compute the co-evolution statistics using the homologous sequences, (3) train a graph NN to predict the protein contact map (that determines its 3D structure) directly from the co-evolution statistics, (4) demonstrate experimentally a significant boost in performance on the CASP dataset [4][9]. See the attached image!\n\nInstead of the contact map, DeepMind (2021) predicted the distance map, and instead of graph CNNs, they used the quadratic Transformer published in 2017 (the unnormalized linear Transformer had existed since 1991 [11]). DeepMind also used more training data and much more compute for hyperparameter tuning etc. \n\nImage credits: [4][8] \n\nREFERENCES\n\n[1] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Zidek, A. Potapenko, A. 
Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli & D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583-589, 2021.\n\n[2] P. Baldi, G. Pollastri. A machine learning strategy for protein analysis. IEEE Intelligent Systems 17.2 (2002): 28-35.\n\n[3] P. Di Lena, K. Nagata, and P. Baldi. Deep Architectures for Protein Contact Map Prediction. Bioinformatics, 28, 2449-2457, (2012).\n\n[4] V. Golkov, M. J. Skwark, A. Golkov, A. Dosovitskiy, T. Brox, J. Meiler, D. Cremers (2016). Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images. NeurIPS, Barcelona, 2016.\n\n[5] N. Qian and T.J. Sejnowski (1988). Predicting the secondary structure of globular proteins using neural network models. J. Mol. Biol. 1988, 202, 865-884.\n\n[6] H. Bohr, J. Bohr, S. Brunak, R.M.J. Cotterill, B. Lautrup, L. Norskov, O.H. Olsen, S.B. Petersen (1988). Protein secondary structure and homology by neural networks. The α-helices in rhodopsin. FEBS Lett. 1988, 241, 223-228.\n\n[7] S. Hochreiter, M. Heusel, K. Obermayer. Fast model-based protein homology detection without alignment. Bioinformatics 23(14):1728-36, 2007. Successful application of deep learning to protein folding problems, through an LSTM that was orders of magnitude faster than competing methods.\n\n[8] D. Cremers (July 2025). LinkedIn post on the Nobel Prize for AlphaFold. \n\n[9] A Nobel Prize for Plagiarism. Technical Report IDSIA-24-24, 2024 (updated 2025) https://t.co/u9YxfBuqNf . Popular tweets on this:\nhttps://t.co/heYSuPQDxp\nhttps://t.co/QQU9FKpqAh\n\n[10] The Nobel Committee for Chemistry (2024). 
Scientific Background to the Nobel Prize in Chemistry 2024.\n\n[11] Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Switzerland, 2022 (updated 2025). Preprint https://t.co/YZrEphq1qx. This extends the 2015 award-winning deep learning survey in the journal \"Neural Networks.\"",
  "source": "Twitter for iPhone",
  "retweetCount": 38,
  "replyCount": 27,
  "likeCount": 338,
  "quoteCount": 6,
  "viewCount": 122522,
  "createdAt": "Thu Jan 15 16:00:09 +0000 2026",
  "lang": "en",
  "bookmarkCount": 203,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2011830777101574264",
  "displayTextRange": [
    0,
    301
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "SchmidhuberAI",
    "url": "https://x.com/SchmidhuberAI",
    "twitterUrl": "https://twitter.com/SchmidhuberAI",
    "id": "1163786515144724485",
    "name": "Jürgen Schmidhuber",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1715797038535680000/ZFrYnYWD_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1163786515144724485/1721394141",
    "description": "",
    "location": "Switzerland, KSA",
    "followers": 179337,
    "following": 0,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Tue Aug 20 12:15:46 +0000 2019",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 1407,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 82,
    "statusesCount": 210,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1884632429412921664"
    ],
    "profile_bio": {
      "description": "Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.",
      "entities": {
        "description": {},
        "url": {
          "urls": [
            {
              "display_url": "people.idsia.ch/~juergen/most-…",
              "expanded_url": "https://people.idsia.ch/~juergen/most-cited-neural-nets.html",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/16khDh3Vwn"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/COIYqg0y8w",
        "expanded_url": "https://twitter.com/SchmidhuberAI/status/2011830777101574264/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "2011829132104253441",
        "indices": [
          302,
          325
        ],
        "media_key": "3_2011829132104253441",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvrc++/20ABCgACG+t1bsFWYHgAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG+tz77/bQAEKAAIb63VuwVZgeAAA",
            "media_key": "3_2011829132104253441"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G-tz77_bQAEl3sp.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 1174,
              "w": 2096,
              "x": 0,
              "y": 0
            },
            {
              "h": 1548,
              "w": 1548,
              "x": 326,
              "y": 0
            },
            {
              "h": 1548,
              "w": 1358,
              "x": 421,
              "y": 0
            },
            {
              "h": 1548,
              "w": 774,
              "x": 713,
              "y": 0
            },
            {
              "h": 1548,
              "w": 2096,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1548,
          "width": 2096
        },
        "sizes": {
          "large": {
            "h": 1513,
            "w": 2048
          }
        },
        "type": "photo",
        "url": "https://t.co/COIYqg0y8w"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "people.idsia.ch/~juergen/physi…",
        "expanded_url": "https://people.idsia.ch/~juergen/physics-nobel-2024-plagiarism.html",
        "indices": [
          3689,
          3712
        ],
        "url": "https://t.co/u9YxfBuqNf"
      },
      {
        "display_url": "x.com/SchmidhuberAI/…",
        "expanded_url": "https://x.com/SchmidhuberAI/status/1844022724328394780",
        "indices": [
          3739,
          3762
        ],
        "url": "https://t.co/heYSuPQDxp"
      },
      {
        "display_url": "x.com/SchmidhuberAI/…",
        "expanded_url": "https://x.com/SchmidhuberAI/status/1865310820856393929",
        "indices": [
          3763,
          3786
        ],
        "url": "https://t.co/QQU9FKpqAh"
      },
      {
        "display_url": "arxiv.org/abs/2212.11279",
        "expanded_url": "https://arxiv.org/abs/2212.11279",
        "indices": [
          4031,
          4054
        ],
        "url": "https://t.co/YZrEphq1qx"
      }
    ],
    "user_mentions": [
      {
        "id_str": "4783690002",
        "indices": [
          23,
          38
        ],
        "name": "Google DeepMind",
        "screen_name": "GoogleDeepMind"
      },
      {
        "id_str": "1463119548115087362",
        "indices": [
          800,
          815
        ],
        "name": "Sepp Hochreiter",
        "screen_name": "HochreiterSepp"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}