Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
William Lamkin · @WilliamLamkin
📅 Aug 30, 2023 · 981d ago · 🆔 09424347

New ModelScope Image2Video demo shared by @fffiloni https://t.co/tUzswIT9cs

@fffiloni •

A new ModelScope Image2Video is out on @huggingface 🤗, and we love it! It generates a short video from an init image, keeping style consistency and trying to preserve the general composition of the source. Share your results with the community 😌🤩 👉 https://t.co/AVqVAMqaCi

โค๏ธ34
likes
๐Ÿ”13
retweets
๐Ÿ–ผ๏ธ Media
Leandro von Werra · @lvwerra
📅 Aug 30, 2023 · 981d ago · 🆔 03615168

Introducing TextEnvironments in TRL 0.7.0! https://t.co/SuGrdSaMZh With TextEnvironments you can teach your language models to use tools to solve tasks more reliably. We trained models to use Wiki search and Python to answer trivia and math questions! Let's have a look how 🧵 https://t.co/2ZuvBQJJsa

🖼️ Media (1) · ❤️ 152 likes · 🔁 32 retweets
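The idea behind a "text environment" can be pictured with a small loop: the model emits a tool request inside its output, the environment executes the tool, appends the result to the transcript, and generation resumes until an answer appears that can be rewarded. This is a conceptual sketch only; the `<request>`/`<response>` markers and function names are invented here, not TRL's actual API.

```python
import re

def rollout(generate, tools, prompt, max_turns=3):
    # Alternate model generation and tool execution on one growing transcript.
    transcript = prompt
    for _ in range(max_turns):
        step = generate(transcript)
        transcript += step
        call = re.search(r"<request>(\w+):(.*?)</request>", step)
        if call is None:
            break  # no tool requested: treat this step as the final answer
        name, query = call.group(1), call.group(2)
        transcript += f"<response>{tools[name](query)}</response>"
    return transcript

def reward(transcript, gold):
    # Score 1.0 if the gold answer appears after the last tool response.
    return 1.0 if gold in transcript.split("</response>")[-1] else 0.0

# Scripted "model" standing in for an LLM, plus a calculator tool.
steps = iter(["<request>calc:6*7</request>", " The answer is 42."])
tools = {"calc": lambda q: eval(q, {"__builtins__": {}})}
out = rollout(lambda s: next(steps), tools, "Q: what is 6*7? ")
```

In RL training, the reward on the completed transcript is what teaches the model *when* to emit the request markers.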
Explosion 💥 · @explosion_ai
📅 Aug 29, 2023 · 982d ago · 🆔 83549192

Last week we introduced ✨ Prodigy v1.13.1 ✨ with support for the new "model as annotator" recipes. To show off this new feature, @fishnets88 recorded a demo that shows how it can help prioritize which examples to annotate first. https://t.co/XFoAMnWUzE https://t.co/OcwiQpzyVK

🖼️ Media (1) · ❤️ 18 likes · 🔁 5 retweets
LlamaIndex 🦙 · @llama_index
📅 Aug 30, 2023 · 981d ago · 🆔 51899671

We introduce a new managed index abstraction that abstracts away the ingestion and storage steps of RAG within a managed service. Excited to introduce our first integration with @vectara, courtesy of @ofermend. Check out our full guide here: https://t.co/Pxqk0wdMvn https://t.co/3Y5lRthfV6

🖼️ Media (1) · ❤️ 50 likes · 🔁 17 retweets
Susan Zhang · @suchenzang
📅 Aug 30, 2023 · 981d ago · 🆔 43240286

So... who's going to explain MMLU to these folks? https://t.co/bbkFzoRcD7

@norabelrose •

I'm opposed to any AI regulation based on absolute capability thresholds, as opposed to indexing to some fraction of state-of-the-art capabilities. The Center for AI Policy is proposing thresholds which already include open source Llama 2 (7B). This is ridiculous.

🖼️ Media (3) · ❤️ 154 likes · 🔁 8 retweets
Sebastian Ruder · @seb_ruder
📅 Aug 30, 2023 · 981d ago · 🆔 74652560

🚨 NLP News 🛠 Tool-Augmented LLMs https://t.co/MnqyD3CGfx https://t.co/P6zYj4MTZf

🖼️ Media (1) · ❤️ 19 likes · 🔁 5 retweets
Krrish · @krrish_dh
📅 Aug 30, 2023 · 981d ago · 🆔 26823396

5๏ธโƒฃ new things @LiteLLM ๐Ÿ”ฅ New @Replit 1-click deploy template ๐Ÿ™‹โ€โ™‚๏ธNew Error type - Context Window Exceptions โœŒ๏ธ Exception mapping support for @togethercompute @AI21Labs @OpenRouterAI ๐Ÿฆ™ New CodeLlama Model Page - add to your app in 1 click ๐Ÿ› ๏ธ Set @VertexAI credentials via .env https://t.co/0Vm2RbxhWc

🖼️ Media (1) · ❤️ 9 likes · 🔁 2 retweets
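"Exception mapping" here means normalizing each provider's differently worded errors into one shared exception type so calling code can branch on it once. A minimal sketch of that idea; the class name and message patterns below are hypothetical, not LiteLLM's actual implementation.

```python
class ContextWindowExceededError(Exception):
    """Prompt exceeded the model's context window."""

# Substrings that different backends tend to use for the same failure.
CONTEXT_PATTERNS = (
    "maximum context length",  # OpenAI-style wording
    "context window",
    "too many input tokens",
)

def map_provider_error(exc: Exception) -> Exception:
    # Normalize heterogeneous provider errors into one shared type.
    msg = str(exc).lower()
    if any(pattern in msg for pattern in CONTEXT_PATTERNS):
        return ContextWindowExceededError(str(exc))
    return exc  # pass through anything we don't recognize
```

Callers can then `except ContextWindowExceededError` once (e.g. to truncate and retry), regardless of which backend raised the original error.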
🔎 Julia Evans 🔍 · @b0rk
📅 Aug 29, 2023 · 982d ago · 🆔 51979340 · ⭐ 0.74

some people who make programming easier (who am I missing?) https://t.co/qp1xayvJw9

🖼️ Media (1) · ❤️ 4,754 likes · 🔁 946 retweets
elvis · @omarsar0
📅 Aug 29, 2023 · 982d ago · 🆔 12689654 · ⭐ 1.00

Another interesting short study. Finds that "Llama-2-70b is almost as strong at factuality as gpt-4, and considerably better than gpt-3.5-turbo." Need to take a closer look at how the evaluation is done, but I'm already starting to see strong experimental results on Llama 2 for all kinds of summarization problems. Super exciting stuff! https://t.co/LWlC4B5Gat

🖼️ Media (1) · ❤️ 568 likes · 🔁 93 retweets
anton · @abacaj
📅 Aug 29, 2023 · 982d ago · 🆔 35473880

Full fine-tuning of 15B-parameter LLMs on poor GPUs. Slow? Who cares. Possible? Of course it is https://t.co/uyjTOoBGN5

🖼️ Media (2) · ❤️ 497 likes · 🔁 32 retweets
Crémieux · @cremieuxrecueil
📅 Aug 29, 2023 · 982d ago · 🆔 08281606

Wow, this study is devastating for cynicism. Here's a TL;DR:

In studies 1–3, participants indicated they thought cynics would do better on cognitive tasks. In studies 4–5, cynics were tested, and 1 SD of cynicism was associated with 0.25 and 0.17 SDs lower cognitive ability in studies 4 and 5, respectively.

In study 6, cynics were found to be:
• less educated in 29/30 countries
• less literate in 28/30 countries
• less numerate in 29/30 countries
• less computer-literate in 23/26 countries

Cynicism is simply not smart. Source: https://t.co/fk77cy1TV3

@emollick •

A worldwide survey of 200k people finds cynical people are thought of as smarter... but that, in reality, cynics test lower on cognitive & competency tests. As Stephen Colbert said: "Cynicism masquerades as wisdom, but it is the furthest thing from it." https://t.co/KLc33j4J

🖼️ Media (3) · ❤️ 2,094 likes · 🔁 390 retweets
Brian Roemmele · @BrianRoemmele
📅 Aug 29, 2023 · 982d ago · 🆔 41942976

🔮 Real-time, on-demand generated text-to-video training and education videos completely flattened the memory requirements for understanding anything. With 1TB of memory and a local, non-cloud private AI, few needed YouTube or even TikTok. No one understood in 2023; it was clear in 2025. https://t.co/McjZhDYRLc

🖼️ Media (1) · ❤️ 68 likes · 🔁 18 retweets
Sahil Bloom · @SahilBloom
📅 Aug 29, 2023 · 982d ago · 🆔 32199487

If you want to master any craft, read this:

The 4 Stages of Competence model was created by Martin M. Broadwell in 1969. It says we progress through stages when moving from total novice to expert at a given craft. The stages are as follows:

1. Unconscious Incompetence: At this stage, you're a total novice and don't even know what you don't know. You lack competence and don't have an understanding of your own incompetence.

2. Conscious Incompetence: Here, you've become aware of your own incompetence, but you haven't addressed it yet. You know that there's a gap in your skills that needs to be filled.

3. Conscious Competence: At this stage, you've developed a level of competence at your craft, but it requires conscious effort and focus. You can do it, but it takes work.

4. Unconscious Competence: This is the pinnacle of expertise, where you have extreme competence and can execute without conscious effort. Few people ever reach this stage.

I visualize it most clearly as a hierarchy, with progress marked by a graduation up the pyramid from one stage to the next. This model is useful as a reflection tool, providing clarity about where we sit on a given skill or craft at any given moment. We tend to overestimate our own competency levels, so having a clear framework is helpful for cutting through the noise and delivering an honest personal assessment.

To determine whether you've graduated from one stage to the next, here are some simple questions to ask:

Stage 1 to Stage 2:
• Am I aware of how bad I am at [X]?
• Am I aware of what is required to learn and develop at [X]?

Stage 2 to Stage 3:
• Can I do [X] at a consistently average level?
• Have I avoided "rookie mistakes" the last 10 times I have done [X]?

Stage 3 to Stage 4:
• Can I do [X] at a top-1% level with my eyes closed?
• Do people tell me that I look effortless when doing [X]?

Most of us will spend our lives in Stage 3, where we can create results with effort. But to reach Stage 4, we need to engage in deep, deliberate, focused practice. Our brains have myelin, a fatty tissue that insulates our neurons and helps them fire properly. Stage 4 is where countless hours of effortful practice result in more myelin, allowing us to execute with ease.

Stage 4 is the level of Sprezzatura: studied nonchalance, earned effortlessness. It's a state we can aspire to, but few will achieve it across more than 1-2 areas in our lives (at best).

As you progress in any new endeavor or craft, use the 4 Stages of Competence to reflect on your growth. If you enjoyed this or learned something, follow me @SahilBloom for more in the future!

🖼️ Media (1) · ❤️ 3,442 likes · 🔁 726 retweets
Blockade Labs · @BlockadeLabs
📅 Aug 29, 2023 · 982d ago · 🆔 20259833

Just dropped Style Pack 2: Psychedelic 🌈 These 4 mind-bending 360° styles break the mold for what you expect out of a skybox. Wildly imaginative, artistic, & unexpected, these styles are amazingly fun to play with! Get creating at https://t.co/GHiX51beXA #skyboxai #trippyart https://t.co/Uzy3AuxlPz

โค๏ธ85
likes
๐Ÿ”17
retweets
๐Ÿ–ผ๏ธ Media
Kushal Tirumala · @kushal_tirumala
📅 Aug 29, 2023 · 982d ago · 🆔 34273927

Excited to release our work on data selection for LLM pre-training! We introduce a new data selection method for large-scale web data (D4) which gets ~20% efficiency gains & +2% downstream accuracy at 6.7B scale over the current standard of randomly sampling MinHash-deduped web docs https://t.co/imH9K5rSfx

🖼️ Media (1) · ❤️ 147 likes · 🔁 29 retweets
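The "MinHash-deduped web docs" baseline refers to near-duplicate removal via MinHash: each document gets a small signature, and the fraction of matching signature slots estimates Jaccard similarity between shingle sets. A toy version of the estimator (shingle size, signature length, and the seeded-MD5 hashing are arbitrary illustrative choices):

```python
import hashlib

def shingles(text, n=3):
    # Overlapping word n-grams of the document.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def signature(text, k=64):
    # For each of k hash seeds, keep the minimum hash over all shingles.
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(text))
        for seed in range(k)
    ]

def est_jaccard(a, b):
    # Fraction of matching signature slots estimates Jaccard similarity.
    sa, sb = signature(a), signature(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)
```

Production pipelines bucket signatures with locality-sensitive hashing instead of comparing all pairs, but the estimator is the same.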
Guido Appenzeller · @appenz
📅 Aug 29, 2023 · 982d ago · 🆔 52083570

Asked GPT-4 to remove unnecessary boilerplate from a message from my bank (@Chase), and it reduced its length by 85%. This tells us a lot about the power of AI. And about Chase. https://t.co/KofxlZtwxh

🖼️ Media (1) · ❤️ 28 likes · 🔁 5 retweets
LlamaIndex 🦙 · @llama_index
📅 Aug 29, 2023 · 982d ago · 🆔 39966070

We now have embedding finetuning: optimize your retrieval performance 🔥 The underrated piece: retrieval evals are a big pain point in building RAG. We have auto QA dataset generation capabilities from text that you can use for finetuning AND evals. 📗: https://t.co/KRO8taBgt6 https://t.co/tP9nYzgZ2q

🖼️ Media (1) · ❤️ 145 likes · 🔁 27 retweets
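The auto-QA idea works because each synthesized question has a known source chunk, so the same pairs serve as both finetuning data and a labeled eval set. A toy sketch: `make_question` is a trivial stand-in for an LLM call, and the word-overlap retriever is a stand-in for an embedding index.

```python
def make_question(chunk):
    # A real pipeline would prompt an LLM; this template is a stand-in.
    return f"which passage mentions {chunk.split()[0]}"

def build_eval_set(chunks):
    # Each question is labeled with the index of the chunk it came from.
    return [(make_question(c), i) for i, c in enumerate(chunks)]

def hit_rate(chunks, eval_set, k=1):
    hits = 0
    for question, gold in eval_set:
        # Toy retriever: rank chunks by word overlap with the question.
        ranked = sorted(
            range(len(chunks)),
            key=lambda i: -len(set(chunks[i].split()) & set(question.split())),
        )
        hits += gold in ranked[:k]
    return hits / len(eval_set)
```

Swapping in a real retriever lets you measure hit-rate@k before and after finetuning the embedding model on the same pairs.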
elvis · @omarsar0
📅 Aug 29, 2023 · 982d ago · 🆔 04200278

As an LLM practitioner, it's surreal to witness the vast range of applications LLMs are powering. I am observing heavy use of LLMs to extract actionable insights from internal knowledge bases and all kinds of data sources. Common enterprise tasks range from custom chatbots and complex summarization capabilities to advanced research assistants and analytics. I am amazed that we can build these powerful LLM-powered solutions today without touching a line of code. If you are interested in this space, I highly recommend you check out this upcoming workshop: https://t.co/v19zmTYmyE

It will cover:
• Using custom data transformers, vector stores, and custom user-code modules
• Calling out to any LLM (GPT-4 or fine-tuned by Abacus)
• Chaining data, user-code, and LLM modules together
• Deploying into production and monitoring usage

🖼️ Media (1) · ❤️ 134 likes · 🔁 18 retweets
Robert Lukoshko · @Karmedge
📅 Aug 29, 2023 · 982d ago · 🆔 67548724

I guess we can consider millions of startups with meeting summarization dead. @Google just dropped a Duet tool: ~summarize ~search ~get action items ... inside your Google suite docs. Turns out @Scobleizer was right after all?

🖼️ Media (1) · ❤️ 14 likes · 🔁 4 retweets
Perplexity · @perplexity_ai
📅 Aug 29, 2023 · 982d ago · 🆔 88345449

Introducing Claude-2 by @AnthropicAI, now on Perplexity Pro. Update your settings now: https://t.co/QM2YqxTpxQ

Here's what Claude-2 brings to you:
🗂️ Deeper Research: longer context and larger files up to 25MB.
✍️ Better Writing: more natural and readable content.
⚡️ Quick Answers: human-like responses with Quick Search and Copilot.

Enabling Claude-2 on Perplexity is another step towards offering our users the world's best research assistant: providing access to Anthropic's models designed for maximum utility and safety. Try it out by activating Claude-2 in your Perplexity Pro settings now.

โค๏ธ544
likes
๐Ÿ”85
retweets
๐Ÿ–ผ๏ธ Media
Haroon Choudery · @haroonchoudery
📅 Aug 29, 2023 · 982d ago · 🆔 52148254

📣 Autoblocks is now available to everyone! We built Autoblocks to help teams *improve & differentiate* their LLM products. If you're building with LLMs, here's how Autoblocks can be your secret weapon: https://t.co/TOmOZ1rpjK

โค๏ธ41
likes
๐Ÿ”9
retweets
๐Ÿ–ผ๏ธ Media
Hamel Husain · @HamelHusain
📅 Aug 28, 2023 · 983d ago · 🆔 24962549 · ⭐ 1.00

This is huge! 🥳 Half of the conversations around LLMs are "Will this fit on my GPU(s)?", and this calculator makes that much easier to answer. There are some important nuances for LoRA, and I took notes on that here: https://t.co/Bct28Wwhea https://t.co/MilXje5IVB

@TheZachMueller •

Excited to announce a new @huggingface space to help with one of machine learning's biggest questions: How much space does {X} model take in vRAM? And most importantly: when using `device_map="auto"` https://t.co/kmQCuPeAdM https://t.co/b2xQRreucD

🖼️ Media (2) · ❤️ 397 likes · 🔁 69 retweets
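The back-of-the-envelope version of the "will it fit?" calculation the space automates: weights dominate inference memory, while full training roughly adds gradients plus fp32 Adam moments. This sketch deliberately ignores activations and KV cache, so real usage is somewhat higher.

```python
def est_vram_gb(n_params, dtype_bytes=2, training=False):
    # Rule-of-thumb memory estimate; excludes activations and KV cache.
    total = n_params * dtype_bytes           # model weights
    if training:
        total += n_params * dtype_bytes      # gradients (same dtype as weights)
        total += n_params * 8                # Adam m and v moments in fp32 (2 x 4 bytes)
    return total / 1024**3

# A 7B model in fp16 needs ~13 GiB just to load the weights...
inference = est_vram_gb(7_000_000_000, dtype_bytes=2)
# ...while full fine-tuning with Adam balloons to ~78 GiB before activations.
training = est_vram_gb(7_000_000_000, dtype_bytes=2, training=True)
```

This gap between the two numbers is exactly why LoRA (which trains only small adapter matrices) has the nuances Hamel mentions.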
Hard Kothari · @HardKothari
📅 Aug 28, 2023 · 983d ago · 🆔 97651628

Are you using @LangChainAI but finding it difficult to debug? Not anymore, with LangSmith. It makes tracing each LLM call very easy and intuitive; it's like looking under the hood of the system. After getting beta access, I explored it over the last week, and below are my 🔑 takeaways: 🧵

🖼️ Media (1) · ❤️ 57 likes · 🔁 11 retweets
Suzana Ilić · @suzatweet
📅 Aug 28, 2023 · 983d ago · 🆔 38032958

From @seb_ruder's NLP News - Exploring tool use: Language models "are limited to producing natural language, which does not allow them to interact with the real world. This can be ameliorated by allowing the model to access external tools—by predicting special tokens or commands. [..] a tool can be an arbitrary API." https://t.co/f07q86rkny

🖼️ Media (1) · ❤️ 15 likes · 🔁 3 retweets
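The "predicting special tokens or commands" mechanism quoted above can be pictured with a tiny dispatcher: the model's output contains a marker, the runtime executes the named tool, and splices the result back into the text. The `<call>` syntax and the tools registered here are made up purely for illustration.

```python
import re

TOOL_RE = re.compile(r"<call>(\w+)\((.*?)\)</call>")

def run_tools(text, tools):
    # Replace each <call>name(args)</call> marker with the tool's output.
    def dispatch(match):
        name, args = match.group(1), match.group(2)
        return str(tools[name](args))
    return TOOL_RE.sub(dispatch, text)

tools = {
    "calc": lambda expr: eval(expr, {"__builtins__": {}}),  # toy calculator
    "upper": str.upper,
}
result = run_tools("12*7 is <call>calc(12*7)</call>, <call>upper(ok)</call>?", tools)
```

Since "a tool can be an arbitrary API," the dictionary values could just as easily be HTTP calls to a search engine or code interpreter.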
Vaibhav (VB) Srivastav · @reach_vb
📅 Aug 28, 2023 · 983d ago · 🆔 94130888

Want blazingly fast TTS with Bark 🐶? Good news: we made it ~2× faster ⚡️ Powered by 🤗 Transformers & Optimum! 👇

🖼️ Media (1) · ❤️ 72 likes · 🔁 19 retweets
Alex Lebrun · @lxbrun
📅 Aug 28, 2023 · 983d ago · 🆔 63635614

From physician to administrative assistant to physician again. https://t.co/zHDtUTtktF

🖼️ Media (1) · ❤️ 28 likes · 🔁 4 retweets
Nutanix Inc. · @nutanix
📅 Aug 15, 2023 · 996d ago · 🆔 69993472

[Breaking News] #GenerativeAI adoption just got a lot simpler! New Nutanix GPT-in-a-Box Solution enables customers to jump-start #AI innovation while maintaining full control over their data. Details: https://t.co/z63XRuJ1W2 https://t.co/9Qqb4caQMX

🖼️ Media (1) · ❤️ 71 likes · 🔁 23 retweets
WizardLM · @WizardLM_AI
📅 Aug 28, 2023 · 983d ago · 🆔 24356878

💥💥 Wow! We are happy to learn that Phind-CodeLlama-34B is also a "WizardCoder". ☺️ As @phindsearch claimed, they used a WizardCoder-style dataset to train their V1 model, and they may continue training WizardCoder to get their V2 model. But there is no need to delete the comments or even the whole repo ... everyone is welcome to use our method or model to enhance their LLMs, and let's enjoy the benefits of the AI revolution together! ❤️

🖼️ Media (2) · ❤️ 134 likes · 🔁 15 retweets
Nick Dobos · @NickADobos
📅 Aug 28, 2023 · 983d ago · 🆔 94746293

Oh shit, new GitHub Copilot updates! They stole Cursor's UI!! Now includes the terminal in context! And an 8k context window! Only in Visual Studio though... (I think???) Not VS Code :( https://t.co/arznYNWiA3

🖼️ Media (4) · ❤️ 81 likes · 🔁 8 retweets
Luca Soldaini 🎀 · @soldni
📅 Aug 28, 2023 · 983d ago · 🆔 49694050

Please, no more radial plots!
• linear improvement, but area grows quadratically ➡️ overestimates perf
• hard to label axes ➡️ no quantitative use
• overlapping colors ➡️ poor accessibility

Consider a bar chart or a table instead 🙏 https://t.co/NrztmjDRVn

🖼️ Media (2) · ❤️ 155 likes · 🔁 17 retweets
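The first complaint is easy to verify numerically: a radar polygon's area scales with the square of the axis values, so doubling every score quadruples the filled area even though the improvement is only 2×. A quick check with the shoelace formula:

```python
import math

def radar_area(values):
    # Area of the polygon whose i-th vertex sits at radius values[i],
    # angle 2*pi*i/n, computed with the shoelace formula.
    n = len(values)
    pts = [
        (v * math.cos(2 * math.pi * i / n), v * math.sin(2 * math.pi * i / n))
        for i, v in enumerate(values)
    ]
    area = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# Doubling all five axis scores quadruples the shaded area.
ratio = radar_area([2] * 5) / radar_area([1] * 5)
```

A bar chart's ink, by contrast, grows linearly with the value, which is why it reads honestly.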
LlamaIndex 🦙 · @llama_index
📅 Aug 29, 2023 · 983d ago · 🆔 37862664

Our latest release: LLM finetuning abstractions 🔥 We provide abstractions on top of @OpenAI's finetuning API that make it seamless to plug a fine-tuned model into your RAG app in @llama_index. You can also easily distill another LLM into gpt-3.5-turbo: https://t.co/ETLaln1N8a https://t.co/wR1KFGPCnC

🖼️ Media (1) · ❤️ 122 likes · 🔁 26 retweets
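The distillation step mentioned above boils down to logging a stronger teacher model's answers as supervised pairs and fine-tuning the cheaper student on them. A sketch of the data-prep half; the JSONL layout mimics common chat-finetuning formats, and `teacher` is a stand-in for a real LLM call.

```python
import json

def build_distillation_file(prompts, teacher, path):
    # One JSON record per line: the prompt plus the teacher model's answer.
    with open(path, "w") as f:
        for prompt in prompts:
            record = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": teacher(prompt)},
            ]}
            f.write(json.dumps(record) + "\n")
    return path
```

The resulting file is then uploaded to a finetuning API; the student learns to imitate the teacher's outputs on the covered prompt distribution.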
elvis · @omarsar0
📅 Aug 28, 2023 · 983d ago · 🆔 83932379

🎓 ML Papers of The Week (August Edition) ICYMI, we highlight some of the top trending ML papers every week. This is now used by 1000s of researchers and practitioners to follow and discover trending papers and AI topics. The August collection is now finished! We also add quick summaries of the papers and work with our community to write explainers for outstanding papers. We use a combination of AI-powered tools, analytics, and human curation to build the lists of papers. Check it out here: https://t.co/Ffrj4b12zX

🖼️ Media (1) · ❤️ 390 likes · 🔁 87 retweets