@curl_justin
CA AI bills will go into effect soon. Their real-world effect will hinge on how state officials define terms like “frontier models” and “reasonable measures.” In @lawfare, I identify key definitional ambiguities and discuss how officials might resolve them…

SB 53, for example, defines a “frontier model” as one trained with more than 10^26 FLOPS. But many developers build on open-weight models like Qwen. If they fine-tune an open-weight model, should they include the pre-training compute for the base model?

The statute seems to say yes. But this creates two problems. First, developers often don’t know how much compute was used to train a base model. And second, a cumulative approach might sweep in companies far from the statute’s intended targets. Airbnb’s revenues exceeded $500m last year, so if it fine-tunes Qwen and the total compute exceeds 10^26 FLOPS, it might technically qualify as a frontier developer.

Yet if the statute does NOT take a cumulative approach, developers could circumvent it by fine-tuning separate open-weight models. They’d be deploying models with capabilities at or near the frontier with limited oversight.

(NOTE: This is also relevant to NY state officials implementing @Sen_Gounardes and @AlexBores's RAISE Act)

Other definitional ambiguities exist with CA SB 243’s use of “reasonable measures,” AB 853’s use of “to the extent technically feasible,” and AB 621’s use of “reasonably should know.”

Read more on what CA state officials should do next below!
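The difference between the two readings boils down to a simple threshold check. A minimal sketch, assuming hypothetical compute figures (the 10^26 FLOPS threshold is from SB 53; the base-model and fine-tuning numbers below are made up for illustration, not disclosed figures for any real model):

```python
# Sketch of SB 53's two possible compute-counting rules.
# The 10^26 threshold is the statute's; all other numbers are assumptions.
FRONTIER_THRESHOLD = 1e26  # FLOPS, per SB 53

def is_frontier_cumulative(base_pretrain_flops: float, fine_tune_flops: float) -> bool:
    """Cumulative reading: count the base model's pre-training compute too."""
    return base_pretrain_flops + fine_tune_flops > FRONTIER_THRESHOLD

def is_frontier_noncumulative(fine_tune_flops: float) -> bool:
    """Non-cumulative reading: count only the fine-tuner's own compute."""
    return fine_tune_flops > FRONTIER_THRESHOLD

# Hypothetical figures: a large open-weight base model plus a big fine-tune.
base_model_compute = 5e25   # assumed pre-training compute of the base model
fine_tune_compute = 6e25    # assumed compute for the fine-tuning run

print(is_frontier_cumulative(base_model_compute, fine_tune_compute))   # True: 1.1e26 > 1e26
print(is_frontier_noncumulative(fine_tune_compute))                    # False: 6e25 <= 1e26
```

The same fine-tune falls on opposite sides of the line depending on the reading — which is why the first problem (developers often can't observe `base_model_compute` for an open-weight model at all) matters so much in practice.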