
Hey all - OP here. We're not affiliated with Ultralytics or the other researchers. We're a startup that enables developers to use computer vision without being machine learning experts, and we support a wide array of open source model architectures for teams to try on their data: https://models.roboflow.ai

Beyond that, we're just fans. We're amazed by how quickly the field is moving and we did some benchmarks that we thought other people might find as exciting as we did. I don't want to take a side in the naming controversy. Our core focus is helping developers get data into any model, regardless of its name!



YOLOv5 seems to have one important advantage over v4, which your post helped highlight:

Fourth, YOLOv5 is small. Specifically, a weights file for YOLOv5 is 27 megabytes. Our weights file for YOLOv4 (with Darknet architecture) is 244 megabytes. YOLOv5 is nearly 90 percent smaller than YOLOv4. This means YOLOv5 can be deployed to embedded devices much more easily.

Naming controversy aside, it's nice to have a model that can get close to the same accuracy at roughly 10% of the size.

Naming it v5 was certainly ... bold ... though. If it can't outperform v4 in any scenario, is it really worthy of the name? (On the other hand, if v5 can beat v4 in inference time or accuracy, that should be highlighted somewhere.)

FWIW I doubt anyone who looks into this will think roboflow had anything to do with the current controversies. You just showed off what someone else made, which is both legit and helpful. It's not like you were the ones that named it v5.

On the other hand... visiting https://models.roboflow.ai/ does show YOLOv5 as "current SOTA", with some impressive-sounding results:

SIZE: YOLOv5 is about 88% smaller than YOLOv4 (27 MB vs 244 MB)

SPEED: YOLOv5 is about 180% faster than YOLOv4 (140 FPS vs 50 FPS)

ACCURACY: YOLOv5 is roughly as accurate as YOLOv4 on the same task (0.895 mAP vs 0.892 mAP)
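For what it's worth, the percentage figures are at least internally consistent with the raw numbers quoted. A quick sanity check (using only the numbers from the claims above):

```python
# Sanity-check the quoted size and speed figures.
size_v5_mb, size_v4_mb = 27, 244   # weights file sizes from the claims above
fps_v5, fps_v4 = 140, 50           # quoted frames per second

size_reduction = (1 - size_v5_mb / size_v4_mb) * 100  # percent smaller
speedup = (fps_v5 / fps_v4 - 1) * 100                 # percent faster

print(f"{size_reduction:.0f}% smaller")  # 89% smaller (quoted as "about 88%")
print(f"{speedup:.0f}% faster")          # 180% faster
```

So the arithmetic checks out; the open question is how the raw measurements themselves were taken.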

Then it links to https://blog.roboflow.ai/yolov5-is-here/ but there doesn't seem to be any clear chart showing "here's v5 performance vs v4 performance under these conditions: x, y, z"

Out of curiosity, where did the "180% faster" and 0.895 mAP vs 0.892 mAP numbers come from? Is there some way to reproduce those measurements?

The benchmarks at https://github.com/WongKinYiu/CrossStagePartialNetworks/issu... seem to show different results, with v4 coming out ahead in both accuracy and speed at 736x736 res. I'm not sure if they're using a standard benchmarking script though.

Thanks for gathering together what's currently known. The field does move fast.


Agreed!

Crucially, we're tracking "out of the box" performance, e.g., if a developer grabbed model X and used it on a sample task, how could they expect it to perform? Further research and evaluation are recommended!

For size, we measured the sizes of our saved weights files for Darknet YOLOv4 versus the PyTorch YOLOv5 implementation.

For inference speed, we checked "out of the box" speed using a Colab notebook equipped with a Tesla P100. We used the same task[1] for both; see, e.g., the YOLOv5 Colab notebook[2]. For Darknet YOLOv4 inference speed, we converted the Darknet weights using the Ultralytics YOLOv3 repo (as we've seen many do for deployments)[3]. (To achieve top YOLOv4 inference speed, one should rebuild Darknet with OpenCV, CUDA, and cuDNN support, and carefully monitor batch size.)
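For anyone wanting to reproduce this kind of number, a minimal wall-clock FPS harness looks something like the sketch below. This is a generic timing loop, not Roboflow's exact notebook; for CUDA models you would also call torch.cuda.synchronize() before each clock reading so queued GPU kernels are actually counted.

```python
import time

def measure_fps(infer, n_iters=100, warmup=10):
    """Estimate frames per second for a single-image inference callable.

    `infer` is any zero-argument function that runs one forward pass,
    e.g. lambda: model(x). Generic sketch only: for GPU models, insert
    a device synchronize before reading the clock.
    """
    for _ in range(warmup):          # warm up caches / lazy initialization
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed         # frames per second
```

You'd call it as measure_fps(lambda: model(x)) after moving the model and a fixed input tensor to the GPU.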

For accuracy, we evaluated mAP on the task above after quick training (100 epochs), pitting the smallest YOLOv5s model against the full YOLOv4 model (trained with Darknet's recommended max_batches of 2000*n, where n is the number of classes). Our example is a small custom dataset; the comparison should also be investigated on a larger benchmark like COCO (80 classes).
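For context, the "2000*n" refers to Darknet's recommended max_batches training setting of 2000 iterations per class; the AlexeyAB Darknet README also suggests not going below 6000 total, so treat that floor as the assumption it is here:

```python
def darknet_max_batches(num_classes, floor=6000):
    """Recommended Darknet max_batches: 2000 iterations per class,
    with a commonly cited floor of 6000 (assumption taken from the
    AlexeyAB Darknet README's training guidance)."""
    return max(2000 * num_classes, floor)

print(darknet_max_batches(3))   # a 3-class dataset like BCCD -> 6000
print(darknet_max_batches(80))  # COCO-scale, 80 classes -> 160000
```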

[1] https://public.roboflow.ai/object-detection/bccd [2] https://colab.research.google.com/drive/1gDZ2xcTOgR39tGGs-EZ... [3] https://github.com/ultralytics/yolov3


This is why I have so much doubt. To claim it's better in any meaningful way, you need to show it on the same framework, across varied datasets and input sizes, and users should be able to swap it into their own detection problem and see some benefit over the previous version.

> SIZE: YOLOv5 is about 88% smaller than YOLOv4 (27 MB vs 244 MB)

Is that a benefit of Darknet vs TF, YOLOv4 vs YOLOv5, or did you win the NN lottery [1]?

> SPEED: YOLOv5 is about 180% faster than YOLOv4 (140 FPS vs 50 FPS)

Again, where does this improvement come from?

> ACCURACY: YOLOv5 is roughly as accurate as YOLOv4 on the same task (0.895 mAP vs 0.892 mAP)

A difference of 0.1% accuracy can be huge; for example, closing the gap between 99.9% and 100% could require an insanely larger neural network. Even well below 99% accuracy, it seems clear to me that network size can still limit the accuracy that's attainable.

For example, if you really don't care so much for accuracy, you can really squeeze the network down [2].

[1] https://ai.facebook.com/blog/understanding-the-generalizatio...

[2] https://arxiv.org/abs/1910.03159


It's about time for Roboflow to pull this article. It seems highly unlikely that a 90% smaller model would provide similar accuracy, and the result seems to come from a single small custom dataset. Please make a real COCO comparison instead.

The YoloV5 repo itself shows performance comparable to YoloV3: https://github.com/ultralytics/yolov5#pretrained-checkpoints

Another comparison suggests YoloV5 is slightly WORSE than YoloV4: https://github.com/WongKinYiu/CrossStagePartialNetworks/issu...


> It's about time for Roboflow to pull this article.

The article still adds value by suggesting how one would run the network and in general the site seems to be about collating different networks.

Perhaps a disclaimer could be good, reading something like: "the speed improvements mentioned in this article are currently being tested". As a publisher, when you print somebody else's words, unless quoted, they are said with your authority. The claims are very big and it doesn't feel like enough testing has been done yet to even verify that they hold true.


Very cool business model! How long have you been at it? I've been pushing for a while (unsuccessfully, so far) for the NIH to cultivate a team providing such a service to our many biomedical imaging labs. It seems pretty clear to me that this sort of AI hub model is going to win out in at least the medium term versus spending money on lots of small redundant AI teams each dedicated to a single project. What sort of application sectors have you found success with?


Appreciate it!

Nice, I really respect research coming out of NIH. (Happen to know Travis Hoppe?) Coincidentally, our notebook demo for YOLOv5 is on the blood cell count and detection dataset: https://public.roboflow.ai/object-detection/bccd

We've seen 1000+ different use cases. Some of the most popular are in agriculture (weeds vs crops), industrials / production (quality assurance), and OCR.

Send me an email? joseph at roboflow.ai


Do you know of any battery-powered drones that can pick out invasive plants? I've been looking for something like this to use on trails; since the plant's sap is highly poisonous, drones seem like the logical solution.


> We're not affiliated with Ultralytics or the other researchers.

Unfortunately I am now unable to edit to reflect this better.



