🐦 Twitter Post Details

@_akhaliq

Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition

paper page: https://t.co/DiSq9F9AJ7

We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limits their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters; these inserted low-rank matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets, with training times reduced by factors of 3.6 to 5.4.
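
For readers unfamiliar with the mechanism the abstract refers to, the sketch below shows the core LoRA idea in PyTorch: the pretrained weight matrix is frozen and a small trainable low-rank update B·A is added alongside it. This is a minimal illustration assuming the standard LoRA formulation; the LoRALinear wrapper name, rank, and scaling are illustrative choices, not the paper's actual configuration, and the paper's discriminative objective and correlation-based regularization loss are not shown.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wrap a pretrained nn.Linear with a trainable low-rank update:
    #   y = x W^T + b + (alpha / r) * x A^T B^T
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A starts as small Gaussian noise, B as zeros, so the adapted
        # model is initially identical to the base model.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

# Example: wrapping one 768-wide projection (a typical BERT-base layer size).
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable share of this layer: {trainable / total:.2%}")
# Note: the paper's 0.08% figure is relative to the full pretrained model
# (many such layers plus embeddings), not to a single layer as here.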

🔧 Raw API Response

{
  "user": {
    "created_at": "2014-04-27T00:20:12.000Z",
    "default_profile_image": false,
    "description": "AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗)\n\ndm for promo",
    "fast_followers_count": 0,
    "favourites_count": 26631,
    "followers_count": 237609,
    "friends_count": 1888,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3170,
    "location": "subscribe → ",
    "media_count": 13881,
    "name": "AK",
    "normal_followers_count": 237609,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/2465283662/1610997549",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1451191636810092553/kpM5Fe12_normal.jpg",
    "screen_name": "_akhaliq",
    "statuses_count": 21880,
    "translator_type": "none",
    "url": "https://t.co/TbGnXZJwEc",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "2465283662"
  },
  "id": "1707231999755276580",
  "conversation_id": "1707231999755276580",
  "full_text": "Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition\n\npaper page: https://t.co/DiSq9F9AJ7\n\npropose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limit their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with decreased training times by factors between 5.4 and 3.6.",
  "reply_count": 1,
  "retweet_count": 4,
  "favorite_count": 42,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/qJVEny9tS6",
      "expanded_url": "https://huggingface.co/papers/2309.15223",
      "display_url": "huggingface.co/papers/2309.15…"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F7FOcEAW8AAKZ7-.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/_akhaliq/status/1707231999755276580",
  "created_at": "2023-09-28T03:13:22.000Z",
  "#sort_index": "1707231999755276580",
  "view_count": 12758,
  "quote_count": 1,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/_akhaliq/status/1707231999755276580"
}