Announcing V3.0 Tagging With Up to 45% More Classes

We are happy to announce an upcoming update to our Tagging technology! We have been working toward this change so that you can rely more on your API calls and get more precise results, helping you better analyze and classify your visual data.

The updated version will become active on the 28th of August, and you can expect up to 45% more classes*, an overall 15% improvement in precision, and 30% better recall.

NEW vs. OLD Tagging Comparison

Because we believe you should see it with your own eyes and test it yourself rather than just hear about it, we have put together this comparison using one of our demo images.

In this case you can see 46% more classes and a clear improvement in keyword precision. Prior to the update the most significant keyword was insect, while now it is dragonfly: a small increase in recall, but a significant increase in intrinsic value.

Important For Current Users:

Our old tagging version will remain the default for a period of 14 days after the launch. To upgrade, you will need to add the parameter version=3 to your endpoint calls. Follow this example if you are uncertain how to do this.

If your current request URL looks like this:

http://api.imagga.com/v1/tagging?url=http://pbs.twimg.com/profile_images/687354253371772928/v9LlvG5N.jpg

To make a request to the new version, it should look like this:

http://api.imagga.com/v1/tagging?url=http://pbs.twimg.com/profile_images/687354253371772928/v9LlvG5N.jpg&version=3
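If you call the API from code rather than from the browser, here is a minimal Python sketch of the same request. Treat it as an illustration only: the credentials, the requests library usage, and the Basic Auth scheme (using the API key and secret from your dashboard) are our assumptions; only the endpoint and the version parameter come from the example above.

import requests

# Hypothetical credentials - replace with the API key and secret from your dashboard.
API_KEY = "YOUR_API_KEY"
API_SECRET = "YOUR_API_SECRET"

image_url = "http://pbs.twimg.com/profile_images/687354253371772928/v9LlvG5N.jpg"

# Passing version=3 explicitly requests the new tagging version.
response = requests.get(
    "http://api.imagga.com/v1/tagging",
    params={"url": image_url, "version": 3},
    auth=(API_KEY, API_SECRET),
)
print(response.json())

The same call without the version parameter will keep hitting the old default during the transition period described below.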

Once those 14 days are over, the new Tagging version will become the default for a period of 7 days, but all users will still be able to make requests to the old version by adding version=2. At the end of that period only version 3 will remain, and you won't be able to add version parameters to your calls.

What is your experience with version 2 and 3? Did you find much of a difference when testing with your dataset?

* Some classes may have changed names.




Image Detection Bots and Imagga

It looks like 2016 will be the year of bots, and almost all of them claim to use some form of machine learning to do their job even better. Microsoft just released an image detection bot called CaptionBot. As with everything related to image recognition via machine learning, the bot does quite a good job detecting some concepts and quite a bad one when it comes to others.

We always find it fun to compare image recognition services and see how Imagga performs on photos submitted by real users. This blog post was prompted by a friend who posted this on Facebook:

It links to a TNW article by Natt Garun about Microsoft CaptionBot and its failure to deliver consistent results. We love Microsoft and even use Azure and Microsoft's translation tools to improve the language support for tags, but it was quite tempting to run the test and share.

We’ve run the same photos included in the article through the Imagga API demo, and here’s what our image recognition thinks about them. It’s still not perfect! Have fun exploring the results.

The first photo is of a home metal hook:

Metal Hook

We've nailed that one, seriously!

Asian girl in the garden

An obvious description of the photo would be something like "beautiful Asian girl sitting in a veggie garden." We are quite close to it with tags like yard, grass, outdoors, and people. Well, we didn’t figure out the race properly, but that’s something we are currently working on, and lots of improvements are coming in a couple of months.

Asian Girl

There’s something about that photo that makes it difficult for the tech to figure out. We didn’t do an exceptionally good job here. Well, at least we tagged it with happy, which is obvious from the shining face of that Asian woman.

Centipede

It’s obvious we do well with living creatures. Thank goodness we do not need to figure out whether they are male or female and how happy they are.

Darth Vader

This one is quite difficult, as it carries a lot of context that the machine needs to know about. We haven’t trained for movie characters, but the tags are in general good. Well, we can argue about the guitar - if you have a wild imagination you can definitely picture Darth Vader with a guitar. Human doesn’t apply much in this case, and we are happy with its low confidence level ;-)

Man Fetish

Ever wondered where people come up with ideas for testing? This one is a classic. Definitely no surfboard, and yes, it’s an attractive, fashionable male model ;-)

Robot

Toy elephants may look like that in the foreseeable future, but to us this looks like a man-like robot. Automation is all that robots are predestined for, right?

Man on the Grass

Couldn’t agree more that the person lying in the grass looks like an alligator. Tags like summer and grass bring a bit of balance to the results. If not accurate, it is at least a bit closer than a cow lying down in a grassy field.

 

Butter

This one is quite tough. What’s quite visible for a person is not so visible for the machine, which lacks the context of the two major objects in this photo.

Copy machine

It’s a great idea to put a bed next to any desk in the office. Maybe one next to the copy machine is not such a good idea - too much noise and traffic around it.

Brown Mask

We’ve tried to describe this picture, and the results are not too bad ;-)

Gay Couple

Last but not least, we do not see donuts but a happy male couple in love. Well, we got a bit of spicy results with lower confidence, like wife and caucasian, but still…


Tinder, Tigers, and... Tagging

It's sometimes striking how fast some crazy ideas go viral nowadays... and the tiger selfies on Tinder :) are no exception, going far enough to escalate to the attention of the New York State Assembly.

Funny (or not), taking selfies with tigers, and eventually using them as profile pictures, is not allowed in NY any more, unless you want to pay the $500 fine for a few seconds of glory.

Being an image recognition company, especially one focused on auto-tagging, it seems to us like a piece of cake to detect whether there is a tiger in an image! Here is an example:

 

Tinder, tigers, and tagging

You can see the sample test here and upload/paste a tiger selfie (not yours of course) to try it out.

And this could be used via API access by Tinder to warn users... or by the state assembly to detect them ;)
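For the curious, here is a rough sketch of how such a check could look against the tagging endpoint shown earlier. It is only an illustration: the credentials, the confidence threshold, and the exact response fields (we assume each result carries a list of tags with confidence scores) are our assumptions, not a drop-in integration.

import requests

# Hypothetical credentials and threshold - adjust to your own account and needs.
API_KEY = "YOUR_API_KEY"
API_SECRET = "YOUR_API_SECRET"
CONFIDENCE_THRESHOLD = 30  # ignore tags the classifier is very unsure about

def looks_like_tiger_selfie(image_url):
    # Ask the tagging endpoint for the tags of the given image URL.
    response = requests.get(
        "http://api.imagga.com/v1/tagging",
        params={"url": image_url},
        auth=(API_KEY, API_SECRET),
    )
    response.raise_for_status()
    # Assumed response shape: a list of results, each with a list of {"tag": ..., "confidence": ...}.
    for result in response.json().get("results", []):
        for tag in result.get("tags", []):
            if tag.get("tag", "").lower() == "tiger" and tag.get("confidence", 0) >= CONFIDENCE_THRESHOLD:
                return True
    return False

print(looks_like_tiger_selfie("http://example.com/some-profile-photo.jpg"))

A profile-moderation flow could run each new photo through a check like this and flag the ones where the tiger tag comes back with high confidence.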

Just another (weird) validation of how widespread image recognition can be nowadays... in all types of apps and use cases, as images become the ultimate way to communicate and express ourselves so easily, thanks to the cameras in our pockets.

Have an awesome idea that would greatly benefit from automated image tagging? Just sign up for free here and give our auto-tagging API a try.