Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
V
vllm_project
@vllm_project
📅 Dec 08, 2025 · 125d ago
🆔 33680574

🎉 Congrats to the @Zai_org team on the launch of GLM-4.6V and GLM-4.6V-Flash — with day-0 serving support in vLLM Recipes for teams who want to run them on their own GPUs. GLM-4.6V focuses on high-quality multimodal reasoning with long context and native tool/function calling, while GLM-4.6V-Flash is a 9B variant tuned for lower latency and smaller-footprint deployments; our new vLLM Recipe ships ready-to-run configs, multi-GPU guidance, and production-minded defaults. If you're building inference services and want GLM-4.6V in your stack, start here: https://t.co/NhHT6iey6C

@Zai_org • Mon Dec 08 12:14

GLM-4.6V Series is here 🚀
- GLM-4.6V (106B): flagship vision-language model with 128K context
- GLM-4.6V-Flash (9B): ultra-fast, lightweight version for local and low-latency workloads
First-ever native Function Calling in the GLM vision model family
Weights: https://t.co/vKmNo

Media 1
🖼️ Media
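
A minimal offline-inference sketch of what the recipe sets up, using vLLM's Python API; the repo id zai-org/GLM-4.6V and the parallelism/context settings below are assumptions for illustration, and the linked vLLM Recipe remains the canonical configuration.

# Hedged sketch: serve GLM-4.6V with vLLM's offline Python API.
# Model id and resource settings are illustrative; follow the vLLM Recipe for production values.
from vllm import LLM, SamplingParams

llm = LLM(
    model="zai-org/GLM-4.6V",    # assumed Hugging Face repo id
    tensor_parallel_size=4,      # shard the 106B weights across GPUs; adjust to your hardware
    max_model_len=32768,         # cap the context window to fit memory (the model supports up to 128K)
    trust_remote_code=True,
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Summarize what GLM-4.6V is designed for."], params)
print(outputs[0].outputs[0].text)

For a long-running endpoint, the same settings map onto the OpenAI-compatible server, e.g. vllm serve zai-org/GLM-4.6V --tensor-parallel-size 4.
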
A
asapzzhou
@asapzzhou
📅 Dec 08, 2025 · 125d ago
🆔 27770210

(1/n) Tiny-A2D: An Open Recipe to Turn Any AR LM into a Diffusion LM
Code (dLLM): https://t.co/Nv7d1t8Qin
Checkpoints: https://t.co/rpibkb2Xfq
With dLLM, you can turn ANY autoregressive LM into a diffusion LM (parallel generation + infilling) with minimal compute. Using this recipe, we built a 🤗 collection of the smallest diffusion LMs that work well in practice.
Key takeaways:
1. Finetuned on Qwen3-0.6B, we obtain the strongest small (~0.5/0.6B) diffusion LMs to date.
2. The base AR LM matters: Investing compute in improving the base AR model is potentially more efficient than scaling compute during adaptation.
3. Block diffusion (BD3LM) generally outperforms vanilla masked diffusion (MDLM), especially on math-reasoning and coding tasks.

Media 2
+1 more
🖼️ Media
๐Ÿ”ai_fast_track retweeted
A
Zhanhui Zhou @ NeurIPS
@asapzzhou
📅 Dec 08, 2025 · 125d ago
🆔 27770210

(1/n) Tiny-A2D: An Open Recipe to Turn Any AR LM into a Diffusion LM
Code (dLLM): https://t.co/Nv7d1t8Qin
Checkpoints: https://t.co/rpibkb2Xfq
With dLLM, you can turn ANY autoregressive LM into a diffusion LM (parallel generation + infilling) with minimal compute. Using this recipe, we built a 🤗 collection of the smallest diffusion LMs that work well in practice.
Key takeaways:
1. Finetuned on Qwen3-0.6B, we obtain the strongest small (~0.5/0.6B) diffusion LMs to date.
2. The base AR LM matters: Investing compute in improving the base AR model is potentially more efficient than scaling compute during adaptation.
3. Block diffusion (BD3LM) generally outperforms vanilla masked diffusion (MDLM), especially on math-reasoning and coding tasks.

Media 1
โค๏ธ204
likes
๐Ÿ”49
retweets
๐Ÿ–ผ๏ธ Media
J
JinaAI_
@JinaAI_
📅 Dec 08, 2025 · 125d ago
🆔 43190481

Releasing jina-VLM: our new 2B vision language model achieves SOTA on multilingual visual question answering and document understanding among open 2B-scale VLMs. https://t.co/QDZvAt6Wux

🖼️ Media
G
GithubProjects
@GithubProjects
📅 Dec 08, 2025 · 125d ago
🆔 48201165

A toolkit for building agents that watch, listen, and understand video. Low latency by design. Open source. Production ready. Vision Agents lets you build real time video AI that works with your models and your edge layer. Supports YOLO, Moondream, Cartesia, Deepgram, ElevenLabs, HeyGen, Gemini, OpenAI, and more. Quick model switching. Easy to use API. Perfect for coaching tools, collaboration apps, avatars, and robotics.

🖼️ Media
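
The post doesn't show the Vision Agents API itself, so as a generic stand-in for the per-frame loop such a toolkit manages for you, here is a minimal webcam detection sketch using ultralytics YOLO and OpenCV; both libraries are my assumption for illustration and are not part of the announcement.

# Generic real-time detection loop (not the Vision Agents API): read frames, run YOLO, display results.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # small pretrained detector as a stand-in model
cap = cv2.VideoCapture(0)           # webcam; swap in an RTSP/WebRTC source for production feeds
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    cv2.imshow("detections", results[0].plot())   # draw boxes and labels on the frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
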
P
ph_singer
@ph_singer
📅 Dec 09, 2025 · 124d ago
🆔 42601357

@prior_labs Job openings: https://t.co/mbp8ZG4RKj

Media 1
🖼️ Media
C
ClementDelangue
@ClementDelangue
📅 Dec 09, 2025 · 124d ago
🆔 35448914

I don't think there's a more diverse and international platform in AI than @huggingface! Current trending models are coming from all over the world in all sorts of modalities & sizes. That is AI maturing at the speed of light! https://t.co/N0hMmFMZfG

Media 1
🖼️ Media
N
novita_labs
@novita_labs
📅 Dec 08, 2025 · 125d ago
🆔 14803628

🤗 Give GLM-4.6V a try on @huggingface, supported by Novita. https://t.co/Ps4awZWZRn

🖼️ Media
๐Ÿ”huggingface retweeted
N
Novita AI
@novita_labs
📅 Dec 08, 2025 · 125d ago
🆔 14803628

🤗 Give GLM-4.6V a try on @huggingface, supported by Novita. https://t.co/Ps4awZWZRn

โค๏ธ11
likes
๐Ÿ”4
retweets
๐Ÿ–ผ๏ธ Media
O
oodhamboi
@oodhamboi
📅 Dec 07, 2025 · 126d ago
🆔 23497749

may I present https://t.co/9i3jTgUIgn

Media 1
🖼️ Media
P
plantart_id
@plantart_id
📅 Dec 05, 2025 · 128d ago
🆔 66987934

Anyone who skips @billions_ntwk will regret it. This network is made for real humans, not the noise. I'm locked on $BILL @jgonzalezferrer https://t.co/GY81iY4VCD

Media 1 · Media 2
🖼️ Media
G
gerardsans
@gerardsans
📅 Dec 09, 2025 · 124d ago
🆔 72699769

Workshop confirmed! Level up your AI creative skills at DevFest Cairo. Discover how to blend image editing, voice, and video into stunning assets using Google AI. Feeling bold? Bring your best profile photos, the stage is yours. #GoogleDeveloperExpert #AI https://t.co/bL8upyqhgb

Media 1 · Media 2
🖼️ Media
O
omarsar0
@omarsar0
📅 Dec 09, 2025 · 124d ago
🆔 81361813

I love this figure from Anthropic's new talk on "Skills > Agents". Here are my notes:

The more skills you build, the more useful Claude Code gets. And it makes perfect sense: procedural knowledge and continuous learning for the win! Skills are essentially the way you make Claude Code more knowledgeable over time, which is why I had argued that Skills is a good name for this functionality.

Claude Code acquires new capabilities from domain experts (they are the ones building skills). It can evolve the skills as needed and forget the ones it no longer needs. It's a collaborative effort, which can easily be expanded to entire teams, communities, and orgs (via plugins).

Skills are particularly useful for workflows where information and requirements constantly change. Finance, code, science, and human-in-the-loop workflows are all great use cases for Skills.

You can build new Skills using the built-in skill-creation tool, so every new skill follows the best practices. Or you can do what I did and build your own skill creator for custom skills catered to the work you do. That's just another level of customization that Skills enables.

Skills' flexibility lets future capabilities be integrated everywhere, and competitors don't have anything remotely close to this type of ecosystem. The deep understanding Anthropic's engineers have of context management tools and agent harnesses is something to admire. Very bullish on Claude Code.

Media 1
🖼️ Media
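
As a rough sketch of what "building a skill" can look like on disk, here is a tiny scaffolder; the .claude/skills location and the SKILL.md frontmatter fields are assumptions based on public descriptions rather than a spec, so check Anthropic's docs for the authoritative layout.

# Hypothetical scaffold for a Claude Code skill directory (layout assumed, not authoritative).
from pathlib import Path

def scaffold_skill(root: str, name: str, description: str, instructions: str) -> Path:
    skill_dir = Path(root) / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    # YAML frontmatter (name/description) followed by the procedural instructions.
    skill_md.write_text(
        "---\n"
        f"name: {name}\n"
        f"description: {description}\n"
        "---\n\n"
        f"{instructions}\n"
    )
    return skill_md

path = scaffold_skill(
    ".claude/skills",
    "quarterly-report",
    "How to assemble the quarterly finance report from warehouse exports.",
    "1. Pull the latest CSV exports.\n2. Validate totals against last quarter.\n3. Render the summary template.",
)
print(f"wrote {path}")
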
P
ProjNativeHope
@ProjNativeHope
📅 May 02, 2022 · 1440d ago
🆔 48292866

"This book changed me. We are takers. We take from each other. We take from the animals. We take from the land..." #nativeamericans https://t.co/ElnFZUjmCK https://t.co/rDRALlbcbP

Media 1 · Media 2
🖼️ Media
N
Native3rd
@Native3rd
📅 Sep 04, 2023 · 951d ago
🆔 19412609

"We depend on nature not only for our physical survival, we also need nature to show us the way home, the way out of the prison of our own minds." ~ E. Tolle, https://t.co/AcqZ03kp5o

Media 1 · Media 2
🖼️ Media
I
IllumiNative
@IllumiNative
📅 Aug 30, 2024 · 590d ago
🆔 24284280

If you're traveling to a #nationalpark this summer, you're traveling to #Nativelands. Today, the @NatlParkService is reconciling with its past by collaborating with Tribal communities to maintain healthy ecosystems for future generations. Learn more: https://t.co/YKSU85Zafl https://t.co/DrZnFzVwxn

Media 1 · Media 2
+1 more
🖼️ Media
D
desk_coins
@desk_coins
📅 Dec 06, 2025 · 127d ago
🆔 63637858

https://t.co/4RyTAQkRkf - Rharos Network Teams Launch Native Rowa Loan Lending Barrier #NFTNews

Media 1
🖼️ Media
J
jxnlco
@jxnlco
📅 Dec 09, 2025 · 124d ago
🆔 85435322

Slow day have signal tho. https://t.co/qfxdOk6ZpG

Media 2
+6 more
🖼️ Media
Q
q2q2cc
@q2q2cc
📅 Dec 02, 2025 · 131d ago
🆔 72995075

[The Dokdo DAO yapping campaign that keeps rolling out: @APEPE_MEME] APEPE is already listed on Korean exchanges. It's a project that combines two meme tokens and has been growing fast lately! The event rewards are as follows: Top 80: rewards paid out in tiers based on final leaderboard ranking. Participant raffle: 5 fried-chicken sets and 100 coffees. It runs a little under two weeks and the total reward pool is 5,000 USDT, so I really recommend joining! Personally I'm struggling to come up with post topics... I'll probably have to tie it in with my existing yapping posts! (Multi-tagging isn't allowed, of course.)

Media 1
🖼️ Media
Q
q2q2cc
@q2q2cc
📅 Dec 03, 2025 · 130d ago
🆔 13584645

[If you're yapping for @APEPE_MEME, you may want to try this] In Premium -> X Pro, paste this into the search bar: include:nativeretweets (filter:self_threads OR -filter:nativeretweets -filter:retweets -filter:replies) @APEPE_MEME filter:blue_verified lang:ko and you can find the Korean users yapping for the same project. Legendary, right? In fact, if you add "OR @<the project you want>" next to that handle, you can search across all the projects you're yapping for at once. The nice part is that this makes it easy both to get mutual engagement and to give it. Honestly, this is how I handle all my @DOKDODAO yapping engagement; the only problem is there's too much of it lol

@q2q2cc • Tue Dec 02 04:08

[The Dokdo DAO yapping campaign that keeps rolling out: @APEPE_MEME] APEPE is already listed on Korean exchanges. It's a project that combines two meme tokens and has been growing fast lately! The event rewards are as follows: Top 80: rewards paid out in tiers based on final leaderboard ranking. Participant raffle: 5 fried-chicken sets and 100 coffees. It runs a little under two weeks and the total reward pool is 5,000 USDT, so I really recommend joining! Personally I'm struggling to come up with post topics... I'll probably have to tie it in with my existing yapping

Media 1
🖼️ Media
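
The query string in the post above is standard X advanced-search syntax; here is a small helper to URL-encode it into a shareable link (the x.com/search path and the f=live parameter for the "Latest" tab are assumptions about the current web UI).

# Build a shareable X search URL from the advanced-search query quoted above.
from urllib.parse import quote

query = (
    "include:nativeretweets (filter:self_threads OR -filter:nativeretweets "
    "-filter:retweets -filter:replies) @APEPE_MEME filter:blue_verified lang:ko"
)
url = "https://x.com/search?q=" + quote(query) + "&f=live"
print(url)
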
Z
zendafatra
@zendafatra
📅 Dec 03, 2025 · 130d ago
🆔 83114471

Native. Permissionless. Instant. It only happens at @RialoHQ, so welcome!! https://t.co/t1zWZ87l2E

Media 1 · Media 2
🖼️ Media
G
gerardsans
@gerardsans
📅 Dec 09, 2025 · 124d ago
🆔 85993553

Bold Lacanian read on AI hallucination, but the analogy leans on heavy anthropomorphic baggage.

All LLM outputs start the same: every token is just next token prediction. A continuation becomes a hallucination only when a human adds real world context the model never had. There is no psyche trying to fill a lack.

Personality in LLMs is RLHF rewarding fluency, not truth. Apparent traits are prompt shaped data artefacts as in Han et al 2025 arXiv:2509.03730. Self reported Big Five maps to behaviour in about 24 percent of cases. This is a stochastic funnel, not a barred subject.

The confidence in hallucinations is not Lacanian jouissance. It is the Eliza effect. We project coherence and intention, then blame the model for a mismatch created by our own projection. Keep the poetic mirror, but mark where it stops explaining and starts flattering our desire to see a mind in a transformer.

Great paper, but it needs a reminder to flag every anthropomorphic move with the actual technical context. Call out when you are interpreting output after the fact, not describing how it was produced, and avoid projecting human traits that do not exist.

Follow for more insights or subscribe to receive updates in your inbox: https://t.co/DybOvoBDEw

Media 1
🖼️ Media
B
banana_ventures
@banana_ventures
📅 Dec 05, 2025 · 128d ago
🆔 45681298

@AndrewYang AI = actually Indians The encrapification of all things will continue. https://t.co/jps8ooTg42

Media 1 · Media 2
+2 more
🖼️ Media
A
AndyTheDonOne
@AndyTheDonOne
📅 Dec 05, 2025 · 128d ago
🆔 75678339

@narindertweets You know how racism works. You're a pro.... https://t.co/ZgCzYuYVz6

Media 1 · Media 2
+2 more
🖼️ Media
S
sciencegirl
@sciencegirl
📅 Dec 09, 2025 · 124d ago
🆔 92107929

A stunning drone and fireworks show lighting up Guangzhou at night https://t.co/cY9Ot5MmMm

🖼️ Media
๐Ÿ”Scobleizer retweeted
S
Science girl
@sciencegirl
📅 Dec 09, 2025 · 124d ago
🆔 92107929

A stunning drone and fireworks show lighting up Guangzhou at night https://t.co/cY9Ot5MmMm

โค๏ธ82
likes
๐Ÿ”11
retweets
๐Ÿ–ผ๏ธ Media
I
IllumiNative
@IllumiNative
📅 Apr 18, 2023 · 1090d ago
🆔 26804740

As the first Native player in the @NWSL, @gohaam, who is Navajo, San Felipe Pueblo & Black, is making history. But who made Madison into the player & person she has become? The army of women who raised her, she says. Watch the full doc on YouTube (link in bio). https://t.co/LEatnhgd6S

🖼️ Media
G
GoingBallistic5
@GoingBallistic5
📅 Dec 09, 2025 · 124d ago
🆔 54413738

@Scobleizer @Wassieweb3 @autkast @briansolis @ServiceNow https://t.co/L7vgYmVTub

Media 1
🖼️ Media
_
_adishj
@_adishj
📅 Dec 08, 2025 · 125d ago
🆔 38266487

today, we're launching Mosaic Avatars. create realistic AI UGC content with natural looking personas, dynamic movement, and product placement. comment "FREEGEN" to get unlimited free generations for the next 2 days. this release comes with 3 key features. https://t.co/i6VEzFIfNn

🖼️ Media