Imagga Appointed One of the Eight Digital Champions of the UN’s Award for Digital Innovation With Social Impact
Posted on July 21, 2016
We are happy to announce that Imagga has been awarded a very special prize – being one of the 8 global champions of the World Summit Awards. Imagga is the overall winner in the Media & News category.
The 8 WSA Global Champions are excellent examples of how digital innovation can make a true social impact and solve important issues on both a local and a global scale. WSA acknowledges pioneering innovation in the field of digital content and aims to bring visibility to projects that can create sustainable social change and impact worldwide.
During the three-day congress in Singapore, the best digital start-ups, young entrepreneurs and distinguished experts from around the globe met to learn from each other. The congress speakers shared outstanding examples and insights into how digital technologies drive the United Nations agenda for the Sustainable Development Goals (UN SDGs).
An on-site grand jury of 40 international ICT experts at the WSA Social Innovation Congress carefully selected the 8 WSA Global Champions.
It was super exciting to take part in the congress and meet innovators from all over the world. You rarely have the chance to spend quality time with such a diverse group – UN and government officials, locals and expats, young and bright minds involved in amazing community and tech projects.
Chris was on stage on the second day to showcase Imagga and how we are changing the way people handle digital images. It was awesome to see both the jury and the people in the room excited about the practical applications of Imagga’s image recognition technology. It’s always a good indicator of your stage performance when people have questions, share ideas on how you can improve, or just want to meet and talk to you.
The whole WSA experience was amazing. The weather was too humid and hot for our European taste, but with an opening party by the hotel pool with an amazing view of the city, world-class speakers during the congress, the best food Singapore has to offer, and the best and friendliest event organizers, we can boldly state that WSA in Singapore was the coolest event we’ve attended.
Asia is a great market for technologies such as ours. Being there for such a short time validated what we already knew – we need to spend more time and effort helping companies from the region handle and better monetize their digital content. We are in love with Asia and will be coming back soon to do more business there.
The internet provides an amazing opportunity to connect with each other and share information. But like every great invention, it has a dark side. Explicit content is lurking around every corner, and it’s not uncommon to stumble upon it while innocently browsing the web. We call that content Not Safe For Work (NSFW) – not safe for minors and inappropriate for work.
We have been working hard to offer an excellent solution for detecting adult content: an extremely effective API for distinguishing between photos that are not safe for work (nudity), semi-safe (underwear and swimwear) and safe (no nudity whatsoever).
The NSFW (not safe for work) categorizer can be of extreme help to almost any online business that deals with user-generated photos or aggregates them from third parties. Various countries have quite strict restrictions on what content can be publicly available to minors, so we can help those businesses comply with the requirements as well. Not to mention Apple’s App Store rules on nudity, which are enforced severely and have caused problems for apps, especially in the dating vertical.
Up to now, user-generated photo content has been moderated manually. This is a time-consuming and expensive process. It also has some privacy problems – your sensitive content might end up being checked by somebody you personally know (quite embarrassing; the world is small and this happens more often than you think).
Our adult content moderation categorizer can automate this process, so a lot more content gets moderated in no time. The technology is in beta and there might be some flaws, but we’ve still got you and your app covered!
Currently our NSFW (adult content image moderation) categorizer sorts images into three categories:
- nsfw – not safe at all – expect pornographic images, nudes and exposed body parts in this category.
- underwear – medium-safe images such as lingerie, underwear and swimwear
- safe – completely safe images, without any nudity
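To make the three categories concrete, here is a minimal sketch of how an application might turn the categorizer’s confidence scores into a moderation decision. The response shape and the threshold are our assumptions for illustration, not the documented API format:

```python
# Hypothetical moderation policy on top of the three NSFW categories.
# The input shape – a list of {"name": ..., "confidence": ...} dicts with
# confidences in percent – is an assumption about the parsed API response.
def moderation_decision(categories, threshold=50.0):
    scores = {c["name"]: c["confidence"] for c in categories}
    if scores.get("nsfw", 0) >= threshold:
        return "reject"    # explicit content: keep off the public pages
    if scores.get("underwear", 0) >= threshold:
        return "review"    # semi-safe: hold for manual moderation
    return "approve"       # safe: publish right away
```

A stricter deployment (a kids’ community, for example) could simply treat anything other than "approve" as a rejection.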
Here are a couple of use cases that illustrate how our NSFW Categorizer can be used to speed up the process of content moderation:
Marketplaces – awesome for online shops/marketplaces where people upload products (along with some photos) and you provide the infrastructure. The NSFW filter can be moderately restrictive – underwear and swimwear photos can be included, while nude photos stay off the public pages.
Kids’ websites/communities – perfect candidates for aggressive filtering of adult content. Anything related to nudity can be filtered out. Imagga’s NSFW categorizer can be the first step of automated elimination of problematic adult content, and the rest can then be manually moderated to make sure inappropriate photos stay out of kids’ sight.
Dating websites – there are lots of issues here with adult content, especially when it comes to iPhone apps. Dating sites employ lots of people to eliminate the problem. Sometimes you need to upload a new profile photo to impress somebody special, and it’s very frustrating when moderation takes forever even though you uploaded just a photo of your face.
There are probably tons of cases where the NSFW categorizer can be of very practical use. Why don’t you give it a try and share your impressions?
Posted on May 14, 2016
The GPU Technology Conference organized by NVIDIA is one of the best events for GPU-related industries. Artificial intelligence is taking all technology fields by storm and becoming a frontrunner for innovation. This year GTC16 showcased that by focusing on deep learning, virtual reality and self-driving cars. Over 5000 engineers, scientists and entrepreneurs had the chance to hear from the best in the field and learn about the great advancements in both hardware and software. NVIDIA announced some really impressive hardware like the Tesla P100 and the NVIDIA DGX-1 (the world’s first deep-learning supercomputer, powered by 8 cards, that can speed up training times by over 12x). We are definitely going to consider it for our machine learning infrastructure in the foreseeable future.
Imagga was selected to present in the Machine Learning track about practical use cases for image classification and tagging. We hand-picked a few particular and somewhat diverse use cases from among the hundreds of companies and thousands of developers using the Imagga platform.
To give you a glimpse of the use cases from our presentation, let’s mention a few:
Unsplash used Imagga’s auto-tagging to reduce the manual tagging of their amazing free stock photo offering and to enhance its search capabilities.
KIA Motors used Imagga’s auto-tagging and color extraction for quite precise user profiling.
Tavisca built a custom classifier to automate the categorization of over 25M hotel-related photos.
Seoul National University generated a custom classifier for waste pre-sorting.
More than just a showcase, the aim of our presentation was to inspire people – to give them an idea of how image recognition technology can be a game changer for industries that have traditionally depended on manual curation of image content.
It was amazing to see the audience engaged with the subject, following up with questions about the presented cases as well as their own potential ones. We had the chance to meet existing and potential partners at the conference and to extend our network.
The organization was great and everything went quite smoothly. Thanks, NVIDIA, for the opportunity to be on stage and talk about the amazing product we are building with love from Bulgaria. We are looking forward to the next edition.
Posted on April 24, 2016
It looks like 2016 will be the year of bots, and almost all of them claim to use some form of machine learning to do their job even better. Microsoft just released an image detection bot called CaptionBot. As with everything related to image recognition via machine learning, the bot does quite a good job detecting some concepts and quite a bad one with others.
We always find it fun to compare image recognition services and see how Imagga performs on real users’ photos. This blog post was provoked by a friend who posted this on Facebook:
It’s about a TNW article by Natt Garun on Microsoft’s CaptionBot and its failure to deliver consistent results. We love Microsoft and even use Azure and Microsoft’s translation tools to improve the language support for tags, but it was quite tempting to run the test and share.
We ran the same photos included in the article through the Imagga API demo, and here’s what our image recognition thinks of them. It’s still not perfect! Have fun exploring the results.
The first photo is of a home metal hook:
We’ve nailed that one, seriously!
An obvious description of the photo would be something like “Beautiful Asian girl sitting in a veggie garden”. We are quite close to it with tags like yard, grass, outdoors, people. Well, we didn’t figure out the race properly, but that’s something we are currently working on, with lots of improvements coming in the next couple of months.
There’s something about that photo that makes it difficult for the tech to figure out. We didn’t do an exceptionally good job here. Well, at least we tagged it with happy, which is obvious from the shining face of that Asian woman.
It’s obvious we do well with living creatures. Thank goodness we don’t need to figure out whether they are male or female and how happy they are.
This one is quite difficult, as it carries a lot of context that the machine needs to know about. We haven’t trained for movie characters, but the tags are generally good. Well, we can argue about the guitar – if you have a wild imagination, you can definitely picture Darth Vader with a guitar. Human doesn’t apply much in this case, and we are happy with the low confidence level 😉
Ever wondered where people come up with ideas for testing? This one is a classic. Definitely no surfboard, and yes, it’s an attractive, fashionable male model 😉
Toy elephants may look like that in the foreseeable future, but to us this looks like a humanoid robot. Automation is all robots are predestined for, right?
Couldn’t agree more that the person lying in the grass looks like an alligator. Tags like summer and grass bring a bit of balance to the results. If not accurate, at least a bit closer than a cow lying down in a grassy field.
This one is quite tough. What’s clearly visible to a person is not so visible to the machine, which lacks the context of the two major objects in this photo.
It’s a great idea to put a bed next to any desk in the office. Maybe one next to the copy machine wouldn’t be such a good idea – too much noise and traffic around it.
We’ve tried to describe this picture, and the results are not too bad 😉
Last but not least, we do not see donuts but a happy male couple in love. Well, we got a bit of spicy results with lower confidence, such as wife and caucasian, but still…
Posted on April 14, 2016
We are super thrilled to be part of Food Hacks Berlin, organized by HackerStolz. The idea of the hackathon is to find digital ideas that can revolutionize the food industry. The food delivery process has seen lots of innovation in recent years, but it’s obvious there’s so much more that can be done: better services related to what we consume daily, smart apps that automate the burden of manually tracking calorie intake, and many more.
Over 100 participants from 20 countries hacked at Rainmaking Loft, one of the well-known startup destinations in Berlin. It’s inspiring to see so many motivated hackers with experience across so many industries. One of the reasons we love hacker events in Berlin is the amazing mix of talent and the diversity of the participants. There was a group of bright Ukrainians who traveled by bus for over 12 hours, plus people from Poland, Holland, the UK and Switzerland, to name a few.
Imagga was an API partner at the event. We trained two special categorizers to help developers get the best out of our image recognition technology and make sure they build amazing apps – a meal classifier that can recognize over 300 cooked meals, and a food & veggies classifier.
Chris presented the Imagga API at a dedicated workshop, talking about how you can use it to recognize pictures of meals, fruits and veggies.
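As a hedged sketch of how a hackathon app could talk to one of these classifiers through the categorization endpoint – the categorizer id "meals" and the response shape shown here are placeholders, not the real ids or output of the two classifiers:

```python
# Base endpoint for Imagga's categorization API (v1).
API_ENDPOINT = "https://api.imagga.com/v1/categorizations"

def build_categorization_request(image_url, categorizer="meals"):
    """Return the (url, params) pair to hand to e.g. requests.get()
    together with your Basic auth credentials. "meals" is a placeholder
    categorizer id for illustration."""
    return ("%s/%s" % (API_ENDPOINT, categorizer), {"url": image_url})

def top_category(result):
    """Pick the highest-confidence category out of a parsed JSON
    response (the response shape here is an assumption)."""
    categories = result["results"][0]["categories"]
    return max(categories, key=lambda c: c["confidence"])["name"]
```

An app could then show the user only the top-scoring meal and fall back to manual entry when the confidence is low.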
Amazing apps were built during the weekend – from a smart fridge that can tell which food is aging and needs to be thrown away and reordered, to some quite creative ways to use food:
Neighborhood Cuisine – A mobile app that lets you get together with your neighbors to cook tasty recipes from your leftovers.
HelloFridge – the easiest way to save your ingredients from going bad by combining them to fit delicious recipes.
iTrash – the most intelligent trash can in the world
Surprise Meal – Provides surprise ideas for recipes.
Bulker – Early-morning bakery delivery that loves you
Super Quest – Food AR app for basic food education of children
Mint – A unique way to discover a new culture through music, art and food.
Vollkorntoast – New Shopping Experience
rcply – Search for recipes based on ingredients you already have at home
lekkerlekker – Your one-stop shop for finding the best recipes and directly buying all the groceries you need.
VR-ify – Efficient food testing and prototyping process with VR and brain activity scanner
TraceYourFood – See the journey of the food you consume!
KeineWaste – KeineWaste connects food businesses and volunteers in real-time, converting food waste into food donations.
WTF: What the FOOD – So you’ll never run out of müsli again.
Our favorite hack is Food Sherlock, developed by Team Baker Street 😉 Food Sherlock lets you take a photo of any ingredient and gives you nutrition details, recipe suggestions and the possibility to buy it online. The mobile app utilizes the Imagga API for the image recognition part, the Spoonacular API for the nutrition details and the HelloFresh API for the recipe suggestions and possible food delivery.
Imagga prize – GoPro camera, t-shirts and free API access went to Team Baker Street for their incredible work and valuable feedback.
“Getting an idea took some time, so the first hours we decided on an idea, but it took another hour until it was clear for me what I had to do and how it should look like. Uploading the image and resizing it on the frontend was also harder than I thought, just because some of the APIs expect form data for images.”
You can’t do much in the limited time of a hackathon, but the team is already making some bold plans for Food Sherlock. They think it would be quite useful if they could generate location-based data for retailers, work on improving the image recognition service with data specific to different geographic locations, experiment with new ways to visualize the results (maybe adding something like a search heat map overlay) and offer text-to-speech to make sure refugees can also use the app.
Posted on February 17, 2016
A hipster bar is where only hipsters are allowed! How do you enforce that? With a physical doorman whose job is to ruthlessly send back guys without beards – or, in the case of Max Dovey’s project, using Imagga’s image recognition technology. The hipster bar was open to the public for the duration of WdW Festival 2015.
Let’s get into the details of this quite unique usage of Imagga’s powerful image recognition technology. To enter the bar, you stand in front of a camera that snaps a photo of you and sends it to the Imagga servers. The tech then analyzes your look and returns how certain the system is that you are a hipster.
If you are found to be over 90% hipster, the door of the bar opens and you can join the great company of people who are hipster enough.
“Hipster” is quite a loose term, usually used to describe a subculture of people who attempt to keep up with the latest trends and remain ‘hip’. These are men and women in their 20s and 30s who value progressive politics and independent thinking, and often have an appetite for art, indie rock and counterculture. Of course, being a hipster includes a certain look – thick-rimmed glasses, tight-fitting jeans, old-school sneakers, side-swept bangs and beards (men only).
Max Dovey, an artist from Rotterdam who initiated the project, sourced thousands of images of hipsters, which our team used to build a special hipster deep learning model. The resulting classifier was able to easily distinguish snaps of hipsters from everything else. Here’s how it actually worked:
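The gist of the door logic can be sketched in a few lines. The 90% threshold comes from the project itself; the category name and the response shape are our assumptions for illustration:

```python
# Hypothetical door controller for the hipster bar.
def door_opens(result, threshold=90.0):
    """result: parsed categorization response with confidences in
    percent. The door opens only for a "hipster" score of 90% or more."""
    for category in result.get("categories", []):
        if category["name"] == "hipster" and category["confidence"] >= threshold:
            return True
    return False
```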
Have another crazy idea? Don’t hesitate to try it out – with our custom training only the sky is the limit… if you are hip enough 😉
Weapons are a controversial topic, especially in zones of conflict. A lot of photos of people holding weapons are posted daily on social media. Analyzing the concentration of, and the motives behind, this kind of photo can eventually hint at a trend or at potential problems.
In a recent post, Justin Seitz showcases a simple hack for discovering photos of weapons on social media using Imagga’s image recognition API. Not all photos of people with weapons are detected by our general tagging API, but there’s a simple explanation for that – the API hasn’t been specifically trained for the task. Still, it’s quite amazing that we detect various types of weapons in different contexts.
Applications of this are numerous: detecting a concentration of weapon-related photos and possible militarization in a certain area, or unauthorized/hazardous use of weapons, including by children, to name a few.
Justin has also managed to improve the results by cropping some of the images in advance, so they are more focused on the potential weapon. Upcoming updates of our API will handle this even better – we are releasing positional tagging of objects soon; besides giving the actual position, it will also improve object-level recognition. Stay tuned!
In addition to the very interesting use case, Justin’s article is a great practical guide on how to use Imagga’s auto-tagging API, so make sure to check it out.
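As a rough illustration of this approach, a post-processing step over the auto-tagging output could look like the following. The keyword list and the confidence cutoff are our own choices for the sketch, not part of the API:

```python
# Illustrative watch-list of weapon-related tags; extend as needed.
WEAPON_TAGS = {"weapon", "gun", "rifle", "pistol", "firearm"}

def flag_weapon_photos(tags, min_confidence=30.0):
    """tags: list of {"tag": ..., "confidence": ...} dicts as returned
    by the tagging API; returns only the weapon-related hits above the
    confidence cutoff."""
    return [t for t in tags
            if t["tag"].lower() in WEAPON_TAGS
            and t["confidence"] >= min_confidence]
```

Running this over a stream of geotagged photos would give the per-area concentration signal described above.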
Posted on January 4, 2016
2015 is over and it’s time to do a recap of the year and remember the good and the bad. It was a joyful and at the same time challenging year, but there is one word that resonates with everyone in the Imagga family – GROWTH.
#1 Self-service Cloud API Growth
It’s been amazing to see our self-service cloud API customer base grow significantly. Lots of exciting practical applications were developed using the Imagga API, and we also saw some really cool and funny apps built with our tagging technology – an application that recognizes hipsters and only lets them into a bar, automated sorting of hotel images, powerful user profiling to enable a smart car selection process, and many, many more.
#2 Big Enterprise Customers in US, EU and Asia
It’s a different sales channel but the same amazing image recognition technology that helped our enterprise clients serve their customers better. We couldn’t be more grateful for the trust and the opportunity to work and learn together.
#3 Great Image Recognition Technology Advancement
We worked hard to make the automated image tagging technology more precise and reliable – we can now recognize more objects and concepts, and are quite good at it. Scaling the API up to virtually any load was a major milestone in making sure we can meet our enterprise customers’ demands. Video tagging is in beta, and we are excited to mature it with the help of a couple of committed customers in the upcoming months. Offering custom training for automated image categorization was another major milestone that we nailed in 2015. Recently we added the first version of our multi-language support.
#4 Team & Advisors
We’ve put lots of effort into building up the team’s AI knowledge and motivation. It’s been awesome to add our first female team member, and we’ll definitely grow that number next year. A team-building retreat in the mountains and regular bowling nights were great fun. LAUNCHub, our current investor, and Vassil Terziev, angel investor and founder of Telerik, were vivid supporters and precious advisors on product and business strategy decisions.
#5 Events, Hackathons, Startup Competitions
Being part of the startup community locally in Sofia, as part of the greater European startup family, is an awesome experience. This summer we started Machine Learning meetups in Sofia and plan to grow the event next year. We’ve also supported multiple hackathons by providing cool prizes and access to the Imagga image recognition API.
We exhibited at NVIDIA’s GPU Tech conference in April and met lots of partners and customers there. We have been selected to present at the 2016 edition of the same conference in April and will be happy to meet you there and tell you about the exciting business applications of machine learning for image recognition and some of the technical challenges we’ve solved along the way.
Winning South Summit’s award for best startup in the Technology for Big Players category in October was a great recognition for us. Our CEO Georgi had the honour to get the award personally from HM Felipe VI, King of Spain.
Imagga has also been selected as one of the winners of the Balkan Venture Forum in Belgrade, Serbia, and of the World Summit Award by the UN.
At the beginning of December we were selected as one of the 7 companies to take part in the inaugural growth program of Google Campus Warsaw – a valuable experience with lots of new contacts from the European VC and startup community.
#6 Partnerships & Networking
A big part of our strategy to spread the word about the awesome image recognition technology we’ve developed, and to make it even more accessible to developers all over the world, is partnerships. We added more during 2015 – automatic image categorization and tagging with Imagga for AYLIEN and OntoText S4 – and are quite excited to help them serve their clients better by adding advanced image categorization and tagging to the range of services they offer.
We’ve also become part of EIT ICT Future Cloud family of companies and this opens up lots of new opportunities for business development in Europe and the US.
Somebody once said that the best investors in your company are actually your customers. It’s amazing to see so many developers, businesses and enterprises trust Imagga’s API and use it to do things unimaginable before. They helped us reach break-even and continue to grow the company organically.
We are ready to jump into 2016 with lots of things in the pipeline:
- Major update of the technology – in terms of both precision and capabilities. Stay tuned!
- Improvements of our web platform, including our API documentation, tutorials and onboarding tools
- Discovering new verticals and helping even more developers and businesses to make great use of our image recognition API.
If you haven’t tried our APIs yet, make sure to sign up for Imagga.
Posted on December 15, 2015
For far too long, image understanding has been considered too complex for machines to deal with. It takes years of training for the human brain to build links between the visible world and concepts of shapes, colors and objects. Even though neural networks were invented a couple of decades ago and were considered a huge step toward machine AI, what was lacking was computing power. With the advance of GPU computing, new opportunities were discovered and algorithms were reinvented, so machine and deep learning are back on the table.
The machines are now powerful enough to grasp the world almost as well as a three-year-old kid. A prerequisite for a neural network to work well is clear, representative data that makes the outcomes more precise and accurate. Huge efforts to collect and classify the images of the world have been undertaken over the last couple of years. Are the machines ready for a battle, then?
At Imagga we take that challenge seriously by building an intelligent image recognition technology that can teach the machine to understand basic daily-life objects, comprehend concepts and eventually deal with complex pictures, where lots of background information needs to be taken into account for proper interpretation. It’s a challenging task, but we love what we do.
With that stated, we are ready to set the stage for an epic battle, the battle of the century – machines vs. humans. To some it might sound funny, unrealistic, pretentious, but it’s coming. At least for now, in the form of a cool game, made with love by Imagga and Algolia.
We’ve called it Human vs. Robot: Clash of the Image Tags. You will take the central role of judging who tags better – the human or the machine. You will be presented with two sets of images for a given text tag and will need to vote for the set that better represents the concept of the tag. Like every good judge, you will need to be unbiased and make up your mind based only on the facts, so you will not know which set was tagged by humans and which by machines. You get five rounds to decide and pronounce a winner. Of course, you can play as many times as you wish, and even invite your friends to try it out and have fun.
The game is made possible by the joint efforts of Algolia and Imagga. Algolia builds powerful search technology for the exploration of large data sets: Algolia’s hosted search API delivers instant and relevant results as you type your search query. Imagga’s part is to provide the automated machine tagging of all the images you will be seeing in the search results.
It might be just a game, but the real idea is to demonstrate how powerful machine recognition is nowadays. It can really replace, or at least greatly assist, people in the process of tagging photos – it’s much faster, more cost-effective, and most of the time more consistent and even more precise than human tagging. This enables a lot of use cases in stock photography, digital asset management, advertising, cloud storage and photo sharing that are otherwise not feasible or even impossible with human tagging.
Why don’t you play Clash of the Image Tags and judge for yourself!
Posted on November 17, 2015
This blog post is part of a series of How-Tos for those of you who are not quite experienced and need a bit of help to set up and properly use our powerful image recognition APIs.
In this one we will help you batch process (using our Tagging or Color Extraction API) a whole folder of photos that reside on your local computer. To make that possible we’ve written a short script in Python: https://bitbucket.org/snippets/imaggateam/LL6dd
Feel free to reuse or modify it. Here’s a short explanation of what it does. The script requires the Python package requests, which you can install using this guide.
It uses requests’ HTTPBasicAuth to set up the Basic authentication used by Imagga’s API, from an API_KEY and API_SECRET which you have to set manually in the first lines of the script.
There are three main functions in the script – upload_image, tag_image, extract_colors.
- upload_image(image_path) – uploads your file to our API using the content endpoint, the argument image_path is the path to the file in your local file system. The function returns the content id associated with the image.
- tag_image(image, content_id=False, verbose=False, language='en') – tags a given image using Imagga’s Tagging API. You can provide either an image URL or a content id (from upload_image) in the 'image' argument; in the latter case you also have to set content_id=True. By setting the verbose argument to True, the returned tags will also contain their origin (whether they come from machine learning recognition or from additional analysis). The last parameter is 'language', in case you want your output tags translated into one of Imagga’s 50 (+1) supported languages. You can find the supported languages here – http://docs.imagga.com/#auto-tagging
- extract_colors(image, content_id=False) – using this function you can extract colors from your image using our Color Extraction API. Just like the tag_image function, you can provide an image URL or a content id (by also setting content_id argument to True).
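To make the parameter handling above concrete, here is a minimal sketch of how tag_image’s arguments could map onto query parameters for the tagging endpoint. It illustrates the logic described above; it is not the actual script, and the parameter mapping is our reading of the description:

```python
def build_tagging_params(image, content_id=False, verbose=False, language="en"):
    """Map tag_image's arguments onto query parameters: a public URL
    goes into "url", while an id returned by upload_image goes into
    "content" (which is why content_id=True must be set for ids)."""
    params = {"content" if content_id else "url": image,
              "language": language}
    if verbose:
        params["verbose"] = 1  # ask the API to include each tag's origin
    return params
```

In the script, the resulting dict is handed to requests.get() together with the HTTPBasicAuth credentials mentioned earlier.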
Note: You need to install the Python package requests in order to use the script. You can find installation notes here.
You have to manually set the API_KEY and API_SECRET variables found in the first lines of the script by replacing YOUR_API_KEY and YOUR_API_SECRET with your API key and secret.
Usage (in your terminal or CMD):
python tag_images.py <input_folder> <output_folder> --language=<language> --verbose=<verbose> --merged-output=<merged_output> --include-colors=<include_colors>
The script has two required arguments – <input_folder> and <output_folder> – and four optional ones – <language>, <verbose>, <merged_output> and <include_colors>.
- <input_folder> – required, the input folder containing the images you would like to tag.
- <output_folder> – required, the output folder where the tagging JSON response will be saved.
- <language> – optional, default: en, the output tags will be translated in the given language (a list of supported languages can be found here: http://docs.imagga.com/#auto-tagging)
- <verbose> – optional, default: False, if True the output tags will contain an origin key (whether it is coming from machine learning recognition or from additional analysis)
- <include_colors> – optional, default: False, if True the output will also contain color extraction results for each image.
- <merged_output> – optional, default: False, if True the output will be merged in a JSON single file, otherwise – separate JSON files for each image.