In a recent post on ImageIdentify, the newly introduced image identification function in the Wolfram Language, Jordan Novet of VentureBeat ran a quick test pitting ImageIdentify against five deep learning image recognition platforms of his choosing. He selected “10 images from Flickr that seemed to clearly fall into the 1,000 categories used for the 2014 ImageNet visual recognition competition” and tagged them with ImageIdentify and the five alternative services.
As one of the very first platform-as-a-service offerings to provide such functionality worldwide, we felt we should join this fun experiment and make our humble contribution by adding the tags Imagga’s image recognition technology generated for the same 10 photos. You can try it with your own photos using Imagga’s online image recognition demo.
We’d better let the results speak for themselves. Please take all this with a grain of salt, and don’t forget that these results were obtained from just 10 randomly selected photos :)
Imagga: cup, mug, coffee mug, drinking vessel, beverage, punch, container, coffee, drink, vessel
Wolfram ImageIdentify: tea
CamFind: white ceramic mug
Clarifai: coffee cup nobody tea mug cafe hot ceramic coffee cup cutout
MetaMind: Coffee mug
Imagga: vegetable, produce, mushroom, food, fungus, cap, organic, lush, moss, forest
Wolfram ImageIdentify: magic mushroom
CamFind: white mushroom
Clarifai: mushroom fungi fungus toadstool nature grass fall moss forest autumn
Imagga: microphone, spatula, business, turner, black, device, knife, technology, hand, cooking utensil
Wolfram ImageIdentify: spatula
CamFind: black kitchen turner
Clarifai: steel wood knife handle iron fork equipment nobody tool chrome
Imagga: signboard, scoreboard, board
Wolfram ImageIdentify: scoreboard
CamFind: baseball scoreboard
Clarifai: scoreboard soccer stadium football game competition goal group north america match
Imagga: shepherd dog, german shepherd, dog, canine, domestic animal, kelpie, doberman, pinscher, pet, animal
Wolfram ImageIdentify: German shepherd
CamFind: black and brown German shepherd
Clarifai: dog canine cute puppy mammal loyalty grass sheepdog fur German shepherd
MetaMind: German Shepherd, German Shepherd Dog, German Police Dog, Alsatian
Imagga: volleyball, ball, people, man, black, racket, body, person, game equipment, equipment (nice try)
Wolfram ImageIdentify: tufted puffin
CamFind: toucan bird
Clarifai: bird one north america nobody animal people adult nature two outdoors
Imagga: Indian cobra, cobra, snake, thunder snake
Wolfram ImageIdentify: black-necked cobra
CamFind: brown and beige cobra snake
Clarifai: snake nobody reptile cobra wildlife daytime sand rattlesnake north america desert
MetaMind: Indian cobra, Naja Naja
Imagga: berry, strawberry, fruit, edible fruit, produce, food, strawberries, juicy, sweet, dessert
Wolfram ImageIdentify: strawberry
CamFind: red strawberry fruit
Clarifai: fruit sweet food strawberry ripe juicy berry healthy isolated delicious
Imagga: plate, pan, wok, china, porcelain, food, dinner, cooking utensil, utensil, delicious
Wolfram ImageIdentify: cooking pan
CamFind: gray steel frying pan
Clarifai: ball nobody pan cutout kitchenware north america tableware competition bowl glass
Orbeus: frying pan
AlchemyAPI: (No tags)
Imagga: black, symbol, business, food, design, pattern, sign, art, traditional
Wolfram ImageIdentify: store
CamFind: black crocs
Clarifai: colour street people color car mall road fair architecture hotel
MetaMind: Shoe Shop, Shoe Store
Orbeus: shoe shop
The fun part aside, we would be very interested to see a more comprehensive subjective and objective evaluation of all these services, including Imagga, with their pros and cons, on more representative and richer datasets, and depending on how the tags will be used in different verticals and applications.
Competition is an important driver for every industry, so we are more than open to participating in such service comparisons and may even initiate one in the very near future.
by Chris Georgiev
Machine learning is getting lots of attention lately. It’s amazing that some 200 people showed up at the Hack Bulgaria event and stayed almost 3 hours to learn more about machine learning. Not to mention it was Friday and the venue was not in the city center! It was a clear indication for us that lots of developers are getting curious about machine learning (ML), and that’s totally cool for companies like ours.
This is a short overview of our not-so-technical presentation about machine learning for images. Some of the other lecturers covered different aspects of ML and its application in various cases and industries.
From the moment of their invention, convolutional networks were great for tasks such as face detection and handwriting recognition. Thanks to advances in GPU technology and the growing base of image data, convolutional networks now demonstrate far better results on complicated tasks such as visual classification of objects in images.
There are some specifics when it comes to image recognition using machine learning. Images are matrices of pixels (raster data), which is why recognition is sensitive to lighting, contrast, saturation, blur, noise, geometric transformations (scaling, translation, rotation), and occlusion.
Conventional image recognition methods struggle to find the optimal set of filters (convolutions) to apply for each specific use case.
There are multiple levels and scales of interest, from low-level features such as texture to high-level features such as composition. On top of that, there’s a need for data augmentation to compensate for this sensitivity (e.g. training with blurred, cropped, scaled, and noised versions of the images).
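As a concrete illustration, here is a minimal sketch of such augmentation using NumPy (our own illustrative code, not Imagga’s production pipeline), generating flipped, cropped, noised, and brightness-shifted variants of a grayscale image:

```python
import numpy as np

def augment(image, rng):
    """Generate simple augmented variants of an image (H x W array),
    illustrating the kind of data augmentation described above."""
    h, w = image.shape
    variants = []
    # Horizontal flip: compensates for left/right orientation.
    variants.append(image[:, ::-1])
    # Random crop: the network also trains on off-center patches.
    top, left = rng.integers(0, h // 4), rng.integers(0, w // 4)
    variants.append(image[top:top + 3 * h // 4, left:left + 3 * w // 4])
    # Additive Gaussian noise: compensates for sensor noise.
    variants.append(np.clip(image + rng.normal(0, 10, image.shape), 0, 255))
    # Brightness shift: compensates for lighting changes.
    variants.append(np.clip(image * 1.2, 0, 255))
    return variants
```

Each training image can thus be turned into several slightly perturbed copies, so the network learns to be robust to the perturbations instead of memorizing exact pixel values.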
In order to do proper image analysis you need a huge (both deep and wide) architecture, which requires massive amounts of memory and processing power, made more accessible today via GPU-powered machines. It still takes a lot of time (up to 10 days) to train some large architectures.
There are a few implementations of convolutional neural networks.
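To make the term concrete: the filters a convolutional network learns are just small weight matrices slid across the image. Here is a minimal NumPy sketch of the core operation (most deep learning frameworks actually compute this cross-correlation and call it convolution); it is an illustration of the concept, not any particular framework’s implementation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation): slide the kernel
    over the image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Applying a hand-made edge kernel such as `[[-1, 1]]` responds strongly wherever neighboring pixel intensities differ; the point of a convolutional network is that it learns thousands of such kernels automatically instead of having them hand-designed.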
We will be publishing a series of articles on our blog about how image recognition is changing the paradigm and will allow image-intensive businesses to finally better understand and monetize their image content.
We are excited to announce some changes to our API pricing policy. We’ve received lots of feedback and requests for more affordable ways to access our APIs.
Today, we are announcing a Developer Plan for Imagga APIs, priced at $14/month, that allows the use of one of our APIs for up to 12,000 calls a month (3,000/day, 2 requests/second). We believe this plan will bring flexibility to the table and the opportunity to apply our breakthrough technology at a more affordable price.
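For developers wondering how to stay within those limits, a simple client-side throttle is usually enough. Here is an illustrative sketch (the numbers come from the plan above; the class itself is our example, not part of any Imagga SDK):

```python
import time

class RateLimiter:
    """Client-side throttle for a per-second rate and a daily quota."""

    def __init__(self, per_second=2, per_day=3000):
        self.min_interval = 1.0 / per_second
        self.per_day = per_day
        self.calls_today = 0
        self.last_call = 0.0

    def acquire(self):
        """Block until the next call is allowed; raise once the
        daily quota is exhausted."""
        if self.calls_today >= self.per_day:
            raise RuntimeError("daily quota exhausted")
        wait = self.min_interval - (time.monotonic() - self.last_call)
        if wait > 0:
            time.sleep(wait)
        self.last_call = time.monotonic()
        self.calls_today += 1
```

Calling `acquire()` before each API request spaces calls at least half a second apart (for 2 requests/second) and stops cleanly when the daily allowance is used up.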
The Hacker plan remains free, but we are reducing the monthly calls to 2,000 (200/day, 1 request per second), and it will be available, as before, only for the image tagging API.
We are eager to see how you will apply our technology in your projects! Send us feedback and any ideas you have regarding our technology offering in general, or any tip you want to share.
We are super excited to announce our partnership with AYLIEN, a natural language processing platform, which will make it possible to add text analytics capabilities to our image recognition and analytics APIs. We believe this partnership will help users of both services better understand their multimedia content and do way more with it.
AYLIEN Text API is a package consisting of eight different Natural Language Processing, Information Retrieval and Machine Learning APIs that help developers extract meaning and insight from text documents. It can be applied in Ad-Targeting, Media Monitoring and Social Listening projects.
Imagga’s Image Recognition API utilizes machine learning, image recognition and deep learning algorithms to identify over 6,000 distinct objects and concepts and return relevant keywords that best describe what’s in the images.
Currently, Imagga’s image analysis endpoint is being added to AYLIEN’s Natural Language Processing API, giving developers the ability to analyze text and images with one API.
Having access to two powerful technologies in a single API creates endless opportunities for businesses that need to deal with large volumes of user-generated content. Users rarely input or share just text or just images, so being able to analyze and understand both at once opens amazing new opportunities for any business to distribute and monetize content.
Together with AYLIEN, we’ve been testing for some time how our technologies can complement each other, and the results were very exciting. Text and images are different, but they complement each other well in many cases and applications.
You can try the new hybrid image and text analysis service here. There’s a nice demo to play with before you are finally sold (you can see the results for some sample images, but you can also upload your own).
2014 was quite exciting and challenging for Imagga. One of the most important things that happened was the significant improvement of our tagging technology. We’ve trained the system to recognize new objects, so the tags it returns are more relevant than ever. This wouldn’t have been possible without the committed efforts of our machine learning researchers and software engineers. We grew in numbers as well. We’ve also got a new website and a better business offering: see our current pricing plans.
What would the year be without great hacker events? We’ve attended and partnered quite a lot: Photo Hack Day NYC, Seedhack Lifelogging London, LDV Vision New York, Photo Hack Day Japan, and the Telerik Hackathon. It’s always nice to meet excited developers eager to get their hands dirty with our APIs.
The end of the year brought us a nice surprise: an awesome award from Trento ITC Labs. Besides the cash, we are excited to be able to leverage their research and business network and to spend a couple of weeks in Berlin and London.
What does 2015 have in store?
We are getting ready for an exciting and quite intensive 2015.
If you haven’t tried our APIs, sign up; our Hacker plan is free forever.
Photo hack days are great fun, and we are always surprised by the creativity and sometimes crazy ideas hackers have, even though we were not physically present this time.
The runner-up for the Imagga API award was team Apollo, named after NASA’s Project Apollo, which gave us the first images of man walking on the moon.
"Our team began with the premise that mainstream photo sharing is characteristically narcissistic. As photo sharers, we often tie our own self-worth to attributions of our digital self. If we could make photo sharing altruistic, there's a chance that an entirely new set of content would be created", shares Joe, one of the guys behind the idea.
On AirBnB, public attraction, and school websites, we see polished images that sometimes do not correspond to the current condition of the place. It’s quite disappointing to head off to a well-deserved vacation only to discover that your hotel hasn’t been renovated in the last decade and that the photos you booked on the basis of were heavily photoshopped.
The solution allows users to request a photo from other users that are currently present in a specific location. All users in that location would then receive a notification requesting a photo, and all of the photos for a given location are respectively tagged and stored in an image database.
Photos are curated by location (utilizing FourSquare’s places API), using both user-requested photos and those from existing content sources (e.g. ShutterStock, Behance, FourSquare). The Imagga image recognition API was used for photo tagging, and Twilio was used to make photo requests via SMS.
As a hypothetical use case, think of a prospective college student who wants a more realistic view into dorm life at the universities they’re considering. The student could request photos from dorm room locations at their target schools, and current students would share a photo of their room. It can also be used for inspiration: say you are organizing a party at a hall and want to see how other organizers used the space, to get ideas for your own party.
“Imagga’s image recognition API quickly became a huge asset to us as we built out Apollo, enabling our team to intuitively apply relevant tags to the photos that comprised our image database.”
You can find more details about the Apollo App on Challenge Post.
We are really looking forward to seeing how the idea unfolds and becomes a real project.
There are thousands of ideas that the Imagga API can help you make reality. Sign up for the Imagga API and start hacking for free (or send us a request for a special startup discount for bigger API volumes).
It’s already a tradition for us to take part as a partner in Photo Hack Day, organized by Aviary. This time Photo Hack Day is in New York, organized for the 4th time (Imagga partnered for the second and third editions of the event, at Facebook HQ in Menlo Park and in Tokyo, Japan). The hackathon brings together talented developers and designers to build amazing image applications using web and mobile photo APIs. This edition is sponsored by Shutterstock, with API partners like Walgreens, Astra by Photoshelter, Foursquare, Filepicker, Twilio, Behance, Aviary, and of course Imagga.
We are giving free usage of our APIs for all participants during the hackathon, 6 months free of our Pro plan (worth $2,094) for the best use of our API & 6 months free of our Indie plan (worth $474) for all teams that use our API.
Here are some good ideas for how our API can be of help:
It’s awesome to take part in photo hack meetups. We will keep you posted about the results and the great implementations that come out of the hack day.
After the exciting news of Telerik being acquired for over $260M last week, it feels extremely exciting to report on Imagga’s API partnership at the Telerik Hackathon. Being part of developer events like this reminds us why we actually do what we do. It feels rewarding.
Imagga’s APIs help developers quickly master any image data and extract meaning at the speed of light. The uses are numerous: you can extract color information and implement color search, classify images into a predefined set of categories, extract keyword tags from unannotated photos, and even have the system learn from user feedback to improve recall.
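As an example of the tagging use case, here is how a client might post-process tag results, keeping only the tags above a confidence threshold. Note that the JSON shape below is hypothetical and purely for illustration; consult the API documentation for the actual response schema:

```python
import json

# Hypothetical response shape for illustration only; the real
# tagging endpoint's schema may differ.
sample = json.loads("""
{"tags": [
    {"tag": "strawberry", "confidence": 91.2},
    {"tag": "fruit", "confidence": 78.5},
    {"tag": "dessert", "confidence": 12.4}
]}
""")

def confident_tags(payload, threshold=30.0):
    """Keep only tags whose confidence clears a threshold, a common
    post-processing step when auto-tagging large photo collections."""
    return [t["tag"] for t in payload["tags"] if t["confidence"] >= threshold]
```

Filtering like this trades recall for precision: a higher threshold yields fewer but more reliable keywords for search and categorization.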
Georgi presented the Imagga APIs during the opening of the hackathon. Over 200 eager developers got excited about the awesome ideas they were about to build during the weekend. Three of the teams requested access to the API, and it was our pleasure to walk them through it.
Being part of the Telerik Hackathon aligns perfectly with what we at Imagga intend to do. Reaching out to the dev community is an important part of our strategy to make image recognition mainstream and to showcase how Imagga technology can be used to solve real-world problems and help harness the big amounts of visual data that bombard us on a daily basis.
We’re super happy to announce that our magical Auto Tagging API has been integrated into the Blitline image processing platform! Besides smart cropping, Blitline now offers its customers the ability to add meaning to their images in a fast and convenient manner.
Integration is quite easy if you are already familiar with Blitline. You will be surprised how good the results are; at least that’s what initial users of the service say! It’s Imagga’s machine learning core that does the job, not mechanical turk workers reviewing and manually annotating your images. So your privacy is protected, and you get the job done faster than any army of keywording specialists could.
Some of the tags that Imagga’s Auto-tagging API returns are far from perfect, but we are constantly working to improve the algorithms and broaden what we know about the world represented in images. The system is currently trained for the most common types of objects and concepts and we add more constantly. If it’s not working well with your set of images, we can train the system for your specific needs, simply contact us here and we’ll be happy to help.
If you’re curious what the Imagga Auto-tagging API can do for you and your untagged photos, give it a try here and share your feedback with us.
Dealing with images is still a complicated and time-consuming process. You never take the time to organize those vacation photos or add tags to your growing image collection. We can do that automatically for you thanks to our auto-tagging technology. There are two ways to do it really fast: get more powerful machines, or speed up the algorithms responsible. We actually did both, but let’s pay some attention to our improved image tagging algorithms. Tagging is faster than ever!
GPUs have come to the aid of image recognition, and thanks to them we are able to do things that were impossible (or ridiculously expensive) several years ago. We just ported all components of our tagging pipeline to take advantage of GPU acceleration, and the result is about 5 times faster tagging! GPUs are a blessing for image-intensive technologies like ours. In fact, the biggest portion of the time we now need to tag an image is spent retrieving the image to be tagged.
With a significantly faster tagging API, we can now handle even larger image volumes, simultaneously for hundreds of users. Real-time tagging has become very attractive, and we are already testing it with several clients.
Intrigued by how fast tagging is now? Why don’t you sign up and test it yourself!