@dasanaike
Social scientists working with materials that require digitization can only study what machines can read. In practice, that means printed Latin-script documents from well-funded archives. In a new working paper, I show that Vision Language Models, used zero-shot, outperform every existing OCR system across every script evaluated, and I propose a pipeline for deploying them on new collections. I apply that pipeline to six archival collections spanning 1.8 million pages across six countries, for under $1,900.
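
The paper has the full pipeline; the zero-shot core is simple enough to sketch here: send a page image to a VLM with a transcription prompt and no fine-tuning. Below is a minimal sketch assuming the OpenAI Python SDK and "gpt-4o" as a stand-in model; the actual models, prompts, and post-processing in the paper may differ.

```python
# Zero-shot page transcription with a VLM -- an illustrative sketch, not the
# paper's pipeline. Assumes the OpenAI Python SDK, an OPENAI_API_KEY in the
# environment, and "gpt-4o" as a placeholder model.
import base64
from openai import OpenAI

client = OpenAI()

def transcribe_page(image_path: str) -> str:
    """Send one scanned page to the model and return its transcription."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for illustration
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe all text on this page exactly as written."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(transcribe_page("page_0001.jpg"))  # hypothetical file name
```

Run over a whole collection, the same loop applies page by page; batching, prompt design, and quality checks are where the pipeline in the paper does the real work.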