It looks like 2016 will be the year of bots, and almost all of them claim to use some form of machine learning to do their job even better. Microsoft just released an image-captioning bot called CaptionBot. As with everything related to image recognition via machine learning, the bot does quite a good job detecting some concepts and quite a bad one with others.
We always find it fun to compare image recognition services and see how Imagga performs on photos submitted by real users. This blog post was prompted by a friend who posted this on Facebook:
It’s about an article on TNW by Natt Garun covering Microsoft CaptionBot and its failure to deliver consistent results. We love Microsoft and even use Azure and Microsoft’s translation tools to improve the language support of our tags, but it was quite tempting to run the same test and share the results.
We ran the same photos included in the article through the Imagga API demo, and here’s what our image recognition thinks of them. It’s still not perfect! Have fun exploring the results.
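If you want to reproduce the test yourself, a minimal sketch of calling the tagging endpoint might look like this. The endpoint path, parameter name, and response field names below follow Imagga's public docs at the time of writing; treat them as assumptions and check the current API reference before relying on them:

```python
import base64
import json
import urllib.parse
import urllib.request

def top_tags(payload, min_confidence=30.0):
    """Extract tag names above a confidence threshold from a payload
    shaped like Imagga's v1 tagging response (field names are assumptions)."""
    results = payload.get("results", [{}])[0]
    return [t["tag"] for t in results.get("tags", [])
            if t["confidence"] >= min_confidence]

def tag_image(image_url, api_key, api_secret):
    """Tag a single image by URL using HTTP Basic auth."""
    query = urllib.parse.urlencode({"url": image_url})
    req = urllib.request.Request("https://api.imagga.com/v1/tagging?" + query)
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return top_tags(json.load(resp))
```

Filtering on the confidence score is what lets you discard wild guesses like the surfboard further down in this post.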
The first photo is of a home metal hook:
We've nailed that one, seriously!
An obvious description of the photo would be something like “beautiful Asian girl sitting in a veggie garden”. We come quite close to it with tags like yard, grass, outdoors, and people. Well, we didn’t figure out the race properly, but that’s something we are currently working on, with lots of improvements coming in a couple of months.
There’s something about this photo that makes it difficult for the tech to figure out. We didn’t do an exceptionally good job here. Well, at least we tagged it with happy, which is obvious from the shining face of that Asian woman.
It’s obvious we do well with living creatures. Thank goodness we don’t need to figure out whether they are male or female and how happy they are.
This one is quite difficult, as it involves a lot of context that the machine needs to know about. We haven’t trained for movie characters, but the tags are good in general. Well, we can argue about the guitar - if you have a wild imagination, you can definitely picture Darth Vader with a guitar. Human doesn’t apply much in this case, and we are happy with its low confidence level ;-)
Ever wondered where people come up with ideas for testing? This one is a classic. Definitely no surfboard, and yes, it’s an attractive, fashionable male model ;-)
Toy elephants may look like that in the foreseeable future, but to us this looks like a male robot. Automation is all that robots are predestined for, right?
We can’t agree more that the person lying in the grass looks like an alligator. Tags like summer and grass bring a bit of balance to the results. If not accurate, they’re at least a bit closer than “a cow lying down in a grassy field”.
This one is quite tough. What’s clearly visible to a person is not so visible to the machine, which lacks the context of the two major objects in this photo.
It’s a great idea to put a bed next to any desk in the office. Maybe one next to the copy machine wouldn’t be such a good idea - too much noise and traffic around it.
We’ve tried to describe this picture, and the results are not too bad ;-)
Last but not least, we don’t see donuts but a happy male couple in love. Well, we got somewhat spicy results with lower confidence, such as wife and caucasian, but still…
by Chris Georgiev
We are excited to have our API available for out-of-the-box development in one more major platform – Python!
Ivan Penchev was enthusiastic about porting the PHP client to Python, and no sooner said than done - here it is: https://github.com/ivanpenchev/imagga-py.
Georgi Kostadinov put it to work in no time, and what better way to try it than prototyping a basic tool that searches recent Instagram photos by color, using the Imagga Color API. He was kind enough to publish his code as well: https://github.com/gkostadinov/Instagram-Color-Search-Python.
Instagram Color Search prototype
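The idea behind such a prototype is simple: extract each photo’s dominant colors (for example via a color-extraction API), then rank the photos by how close their colors sit to the query color. A minimal sketch of the matching step, with all names illustrative rather than taken from the published repository:

```python
from math import sqrt

def color_distance(c1, c2):
    """Euclidean distance between two (r, g, b) triples."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def rank_by_color(photos, query_rgb):
    """Rank photo ids by how close their nearest dominant color is to the query.

    `photos` maps a photo id to a list of (r, g, b) dominant colors,
    as one might collect them from a color-extraction API.
    """
    def best(colors):
        return min(color_distance(c, query_rgb) for c in colors)
    return sorted(photos, key=lambda pid: best(photos[pid]))
```

Plain RGB distance is the simplest choice; a perceptual color space such as CIELAB would match human judgment better, at the cost of a conversion step.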
Thank you guys, you’re awesome!
You can try our APIs by applying for a trial account at www.imagga.com. We currently have PHP, Java, Ruby, and Python clients for our APIs.