@strickvl
I've been building panlabel, a fast Rust CLI that converts between dataset annotation formats, and I'm a few releases behind on sharing updates. Here's a quick catch-up.

v0.3.0 added Hugging Face ImageFolder support, including remote Hub import via --hf-repo. You can point it at a HF dataset repo and it figures out the layout (metadata.jsonl, parquet shards, even zip-style splits that contain YOLO or COCO inside).

v0.4.0 overhauled auto-detection so it gives you concrete evidence when format detection is ambiguous ("found YOLO labels/ but missing images/") instead of a generic error. Also added Docker images.

v0.5.0 brought split-aware YOLO reading for Roboflow/Ultralytics Hub exports, plus conversion report explainability: every adapter now explains its deterministic policies, so you know exactly what happens to your data.

v0.6.0 is the big one. Five new format adapters:

- LabelMe JSON (per-image, with polygon-to-bbox envelope)
- Apple CreateML JSON (center-based coords)
- KITTI (autonomous driving standard, 15 fields per line)
- VGG Image Annotator (VIA) JSON
- RetinaNet Keras CSV

That brings panlabel to 13 supported formats with full read, write, and auto-detection. Also in v0.6.0: YOLO confidence token support, dry-run mode for previewing conversions, and content-based CSV detection.

Single binary, no Python dependencies. Install via pip, brew, cargo, or grab a pre-built binary from GitHub releases.

This is the kind of project I enjoy just steadily plodding away at: ticking off one format at a time until every common object detection annotation format is covered. Still sticking with detection bboxes for now, but the format list keeps growing.

#ObjectDetection #Rust #MachineLearning #ComputerVision #OpenSource
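For anyone curious what the LabelMe "polygon-to-bbox envelope" means in practice: it's just the axis-aligned box spanning a polygon's vertices. A minimal Rust sketch of the technique (my own illustration, not panlabel's actual code):

```rust
/// Axis-aligned bounding box ("envelope") of a polygon:
/// the min/max over all vertex coordinates.
/// Returns (min_x, min_y, max_x, max_y), or None for an empty point list.
fn polygon_envelope(points: &[(f64, f64)]) -> Option<(f64, f64, f64, f64)> {
    let (&(mut min_x, mut min_y), rest) = points.split_first()?;
    let (mut max_x, mut max_y) = (min_x, min_y);
    for &(x, y) in rest {
        min_x = min_x.min(x);
        min_y = min_y.min(y);
        max_x = max_x.max(x);
        max_y = max_y.max(y);
    }
    Some((min_x, min_y, max_x, max_y))
}

fn main() {
    // A triangle annotation collapses to its enclosing box.
    let poly = [(10.0, 20.0), (30.0, 5.0), (25.0, 40.0)];
    println!("{:?}", polygon_envelope(&poly)); // Some((10.0, 5.0, 30.0, 40.0))
}
```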
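On the CreateML note: its JSON stores boxes as a center x/y plus width/height, while COCO-style formats use the top-left corner. The conversion is just a half-width/half-height shift; a small sketch of that idea (illustrative only, not panlabel's internals):

```rust
/// Convert a center-based box (CreateML-style: center x/y, width, height)
/// to a top-left-based box (COCO-style: top-left x/y, width, height).
fn center_to_topleft(cx: f64, cy: f64, w: f64, h: f64) -> (f64, f64, f64, f64) {
    (cx - w / 2.0, cy - h / 2.0, w, h)
}

fn main() {
    // A 100x50 box centered at (200, 150) has its top-left at (150, 125).
    println!("{:?}", center_to_topleft(200.0, 150.0, 100.0, 50.0));
}
```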
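And the KITTI format really is 15 whitespace-separated fields per line: class, truncation, occlusion, alpha, the 2D bbox (left, top, right, bottom), 3D dimensions, 3D location, and rotation. For detection work only the class and the four bbox fields matter. A rough sketch of reading one line (again my own illustration, not panlabel's parser):

```rust
/// Minimal 2D detection view of one KITTI label line.
#[derive(Debug, PartialEq)]
struct KittiBox {
    class: String,
    left: f64,
    top: f64,
    right: f64,
    bottom: f64,
}

/// Parse one KITTI label line, keeping only the class and 2D bbox.
/// KITTI lines are strictly 15 whitespace-separated fields; the bbox
/// occupies fields 4..=7 (zero-indexed).
fn parse_kitti_line(line: &str) -> Option<KittiBox> {
    let fields: Vec<&str> = line.split_whitespace().collect();
    if fields.len() != 15 {
        return None; // malformed line: wrong field count
    }
    Some(KittiBox {
        class: fields[0].to_string(),
        left: fields[4].parse().ok()?,
        top: fields[5].parse().ok()?,
        right: fields[6].parse().ok()?,
        bottom: fields[7].parse().ok()?,
    })
}

fn main() {
    let line = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59";
    let b = parse_kitti_line(line).expect("valid KITTI line");
    println!("{} bbox: ({}, {}) -> ({}, {})", b.class, b.left, b.top, b.right, b.bottom);
}
```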