San Jose, CA. May 10, 2017. Imagga, a premier PaaS provider of visual recognition, now offers its image intelligence solution for deployment on private, local servers. Companies whose data cannot be shared in the cloud for various reasons can now fully benefit from Imagga’s award-winning content recognition technology.
With the introduction of its on-premise solution, Imagga breaks a new innovation barrier. Companies can now have access to best-of-breed deep-learning content recognition technology without ceding confidentiality of their data. While Imagga’s cloud-based solution already offers an extremely high level of security and privacy, its on-premise solution goes a step further, for when data just cannot leave local servers.
“Our on-premise offering is the ultimate data control solution for sectors like private cloud, law enforcement, medical, legal and telecom, and for highly sensitive corporate content. Installed on a local server, Imagga’s visual content recognition can process billions of images, all without leaving the company’s premises,” says CEO and co-founder Georgi Kadrev. “It delivers the highest degree of data security and privacy. No other image A.I. company in the world offers this level of control and security.”
Swisscom, an advanced user of Imagga’s on-premise solution, regularly processes huge volumes of images on their own servers: “Imagga impressed us with the quality of the recognition technology, recognizing both objects and broader scenic categories,” says Andreas Breitenmoser, Project Manager at Swisscom A.G.
Imagga’s on-premise solution can be deployed quickly and seamlessly on any server that meets the technical requirements. The company has been working closely with NVIDIA, the leader in GPU high-performance computing, to optimize all its software to run smoothly and efficiently on NVIDIA GPU servers with Tesla and Pascal architectures.
Highly scalable, the on-premise solution delivers the same level of accuracy as the existing cloud offering, currently deployed by over 9,000 developers worldwide and implemented by companies like Cloudinary, Unsplash, Tavisca, KIA Motors, and Plex TV to process several billion photos and videos in the last year alone.
Imagga (https://www.imagga.com/) is an Image Recognition Platform-as-a-Service providing Image Tagging APIs for developers & businesses to build scalable, image- and video-intensive apps. The company has been recognized by IDC as one of the top innovators in its space for 2016. Built for scalability and easily deployed, it offers auto-tagging, auto-categorization, smart cropping, color analysis and content monitoring, as well as custom classification.
For more information, please contact:
+1 917 304 3875
Imagga logo - https://goo.gl/hgqBkE
Georgi Kadrev, CEO photo - https://goo.gl/BCgj43
by Chris Georgiev
Imagga is recognized as one of the 3 pioneering players in the worldwide image analytics market. IDC’s 2016 Innovators report acknowledges companies that offer an inventive technology and/or groundbreaking new business model.
Imagga stands out with the possibility to offer custom image recognition training using customer-provided data for training sets, according to the prestigious report. Thanks to the flexible training model, customers are offered an unprecedented opportunity to make sense of their image content and use the insights for analytics, understanding customers or better monetization strategies. Depending on the complexity of the training model, the actual training takes from a day to a couple of days. Customers are given visual tools to evaluate the results and decide if fine-tuning is needed for greater performance.
According to Carrie Solinger, senior research analyst, Cognitive Systems and Content Analytics: “Application of natural language processing and machine learning technologies have advanced image analytics’ cost effectiveness and accuracy, exponentially”. Services such as Imagga enable businesses to harness the power of machine learning and do what once was done manually at great expense of manpower, or was impossible due to time restrictions. Real-time (or near real-time) image analytics opens up totally new horizons for companies to optimize their business decisions with a direct effect on productivity and business results.
You can download this report from IDC here.
We are happy to announce that Imagga was awarded a very special prize - being one of the 8 global champions of the World Summit Awards. Imagga is the overall winner in the Media & News category.
The 8 WSA Global Champions are excellent examples of how digital innovation can make a true social impact and solve important issues on both a local and global scale. WSA acknowledges pioneering innovation in the field of digital content and aims to bring visibility to projects that can create sustainable social change and impact worldwide.
During the 3-day congress in Singapore, the selected best digital start-ups, young entrepreneurs and distinguished experts from around the globe met to learn from each other. The congress speakers shared outstanding examples and insights into how digital technologies drive the United Nations agenda for the Sustainable Development Goals (UN SDGs).
An on-site Grand Jury of 40 international ICT experts at the WSA Social Innovation Congress carefully selected the 8 WSA Global Champions.
It was super exciting to take part in the congress and meet innovators from all over the world. You rarely have the chance to spend quality time with a group so diverse - UN and government officials, locals and expats, young and bright minds involved in amazing community and tech projects.
Chris was on stage on the second day to showcase Imagga and how we are changing the way people handle digital images. It was awesome to see both the jury and the people in the room excited about the practical applications of Imagga’s image recognition technology. It’s always a good indicator of your stage performance when people have questions, share ideas on how you can improve, or just want to meet and talk to you.
The whole WSA experience was amazing. The weather was a bit too humid and hot for the European taste, but between an opening party by the hotel pool with an amazing view of the city, world-class speakers during the congress, the best food Singapore can offer, and the best and friendliest event organizers, we can boldly state WSA in Singapore was the coolest event we’ve attended.
Asia is a great market for technologies such as ours. Being there even for such a short time validated what we already knew - we need to spend more time and effort helping companies from the region deal with and better monetize their digital content. We are in love with Asia and will be coming back soon to do more business there.
The internet provides an amazing opportunity to connect with each other and share information. But like every great invention, it has a dark side. Explicit content is lurking around every corner and it’s not uncommon to stumble upon it while innocently browsing the web. We call that content Not Safe For Work (NSFW) - not safe for minors and inappropriate for work.
We have been working hard to offer an excellent solution for detecting adult content, providing an extremely effective API for distinguishing between photos that are unsafe for work (explicit nudity), semi-safe (underwear and swimwear) and totally safe (no nudity whatsoever).
The NSFW (not safe for work) categorizer can be of extreme help to almost any online business that deals with user-generated photos, or aggregates them from third parties. Various countries have quite strict restrictions on what content can be publicly available to minors, so we can help businesses comply with those requirements too. Not to mention Apple’s App Store rules on nudity, which are enforced strictly and have caused problems for apps, especially in the dating vertical.
Until now, user-generated photo content has been moderated manually. This is a time-consuming and expensive process. It also has privacy problems - your sensitive content might end up being checked by somebody you personally know (quite embarrassing; the world is small and this happens more often than you think).
Our adult content moderation categorizer can automate this process, so you can get a lot more content moderated in no time. The technology is in beta stage, and there might be some flaws, but we are still able to cover you and your app!
Currently our NSFW (adult content image moderation) categorizer puts images in three categories: unsafe, semi-safe (underwear and swimwear) and safe.
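As an illustration, here is a minimal Python sketch of how an app might call the categorizer and map its scores to the three buckets above. The endpoint path, the category names (`nsfw`, `underwear`) and the response shape are assumptions for the example, not guarantees about the API:

```python
import base64
import json
import urllib.parse
import urllib.request


def categorize_nsfw(image_url, api_key, api_secret):
    """Send a hosted image to the (assumed) NSFW categorizer endpoint."""
    query = urllib.parse.urlencode({"image_url": image_url})
    req = urllib.request.Request(
        "https://api.imagga.com/v2/categories/nsfw_beta?" + query  # assumed path
    )
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


def moderation_bucket(api_result, threshold=50.0):
    """Map category confidences (illustrative response shape) to a decision."""
    scores = {c["name"]: c["confidence"] for c in api_result["result"]["categories"]}
    if scores.get("nsfw", 0.0) >= threshold:
        return "unsafe"
    if scores.get("underwear", 0.0) >= threshold:
        return "semi-safe"
    return "safe"
```

A marketplace would then publish "safe" and "semi-safe" photos automatically and queue only the "unsafe" bucket for human review.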
Here are a couple of use cases that illustrate how our NSFW Categorizer can be used to speed up the process of content moderation:
Marketplaces - awesome for online shops/marketplaces where people upload products (along with some photos) and you provide the infrastructure. The NSFW filter can be moderately restrictive - underwear and swimwear photos can be included, while only nude photos stay out of the public pages.
Kids websites/communities - a perfect candidate for aggressive filtering of adult content. Anything related to nudity can be filtered out. Imagga’s NSFW categorizer can be the first step of automated elimination of problematic adult content, and the rest can then be manually moderated to make sure inappropriate photos stay out of kids’ sight.
Dating websites - there are lots of issues with adult content here, especially when it comes to iPhone apps. Dating sites employ lots of people to eliminate the problem. Sometimes you need to upload a new profile photo to impress somebody special, and it’s very frustrating when moderation takes forever even though you uploaded just a facial photo.
There are probably tons of cases where the NSFW categorizer can be of very practical use. Why don’t you give it a try and share your impressions?
Hackathons are part of Imagga’s DNA - we love to hack and we definitely like to win. The Porsche InnovationEngine hackathon took place in Salzburg, Austria on 8-10 June 2016, with 12 teams from all across Europe participating. We are happy to announce we won the 1st prize, but first here’s a short report about this great event.
There were three tracks for the teams to compete in - car recognition, financial optimization, and “the best car for me”. Quite obviously, we decided to participate in the car recognition challenge.
Every great hackathon should start with a nice dinner, and that’s what happened Friday night. As you can imagine, we were busy drafting the hack idea, meeting mentors and planning the actual hack. We literally used the time and the available napkins (a great startup weapon for capturing awesome ideas) to craft our winning strategy.
No surprise, the next day we dug straight into building our brilliant idea. Some mentors interrupted the process, but their insight into the car market was quite valuable. We worked hard till midnight, as we wanted to impress the jury with a fully functional app, including a custom classifier trained overnight.
During the final day of the hackathon we presented in front of 40 managers and 3 C-level executives (incl. the CEO) of Porsche Holding Group. Our demo went really smoothly and everyone was impressed with the technology and with the fact that we were able to put together a fully functional demo in such a short time.
Getting the first prize was such an honor and also an opportunity to work in that space - car discovery & search. Not to forget one of the perks of winning a Porsche hackathon - the Audi Driving Experience! Looking forward to it!
The GPU Technology Conference organized by NVIDIA is one of the best events for GPU-related industries. Artificial intelligence is taking all technology fields by storm and becoming a frontrunner for innovation. This year GTC16 showcased that by focusing on deep learning, virtual reality and self-driving cars. Over 5000 engineers, scientists and entrepreneurs had the chance to hear from the best in the field and learn about the great advancements in hardware and software. NVIDIA announced some really impressive hardware like the Tesla P100 and the NVIDIA DGX-1 (the world’s first deep-learning supercomputer, powered by 8 cards, that can speed up training times by over 12x). We are definitely going to consider it for our machine learning infrastructure in the foreseeable future.
Imagga was selected to present at the Machine Learning track about practical use cases for image classification and tagging. We hand-picked a few particular and somewhat diverse use cases from among the hundreds of companies and thousands of developers using the Imagga platform.
To give you a glimpse of the use cases covered in our presentation, let's mention a few:
Unsplash used Imagga auto-tagging to reduce manual tagging of their amazing free stock photo offering and to enhance the search capabilities.
KIA Motors for quite precise user profiling based on Imagga’s auto-tagging and color extraction.
Tavisca for building a custom classifier to automate the categorization of over 25M hotel-related photos.
Seoul National University for generating custom classifier for waste pre-sorting.
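The auto-tagging use cases above all start from the same kind of API call. Here is a minimal Python sketch; the endpoint path, parameter names and response shape are assumptions for illustration rather than a definitive description of the API:

```python
import base64
import json
import urllib.parse
import urllib.request


def tag_image(image_url, api_key, api_secret):
    """Request auto-tags for a hosted image (endpoint path is an assumption)."""
    query = urllib.parse.urlencode({"image_url": image_url})
    req = urllib.request.Request("https://api.imagga.com/v2/tags?" + query)
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


def top_tags(api_result, limit=5, min_confidence=30.0):
    """Keep the highest-confidence tags, e.g. to index photos for search."""
    tags = [
        (t["tag"], t["confidence"])
        for t in api_result["result"]["tags"]
        if t["confidence"] >= min_confidence
    ]
    tags.sort(key=lambda pair: -pair[1])
    return [name for name, _ in tags[:limit]]
```

A stock photo site would run `top_tags` over each upload and feed the result into its search index, which is essentially the workflow Unsplash's case describes.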
More than just showcasing, the aim of our presentation was to inspire people, to give them an idea of how image recognition technology can be a game changer for industries that have traditionally depended on manual curation of image content.
It was amazing to see the audience involved in the subject, following up with questions about the presented cases, as well as their own potential cases. We had the chance to meet existing and potential partners at the conference and to extend our network.
The organization was great and everything went quite smoothly. Thanks NVIDIA for the opportunity to be on stage and talk about the amazing product we are building with love from Bulgaria. We are looking forward to the next edition.
Looks like 2016 will be the year of bots, and almost all of them claim to use some form of machine learning to do their job even better. Microsoft just released an image captioning bot called CaptionBot. As with everything related to image recognition via machine learning, the bot does quite a good job detecting some concepts and quite a bad one with others.
We always find it fun to compare image recognition services and see how Imagga performs on real users’ photos. This blog post was provoked by a friend who posted this on Facebook:
It links to a TNW article about Microsoft CaptionBot, written by Natt Garun, and its failure to deliver consistent results. We love Microsoft and even use Azure and Microsoft's Translation tools to improve the language support for tags, but it’s quite tempting to run the test and share.
We ran the same photos included in the article through the Imagga API demo, and here’s what our image recognition thinks of them. It’s still not perfect! Have fun exploring the results.
The first photo is of a home metal hook:
We've nailed that one, seriously!
An obvious description of the photo would be something like “Beautiful Asian girl sitting in a veggie garden”. We are quite close to it with tags like yard, grass, outdoors, people. Well, we didn’t figure out the race properly, but that’s something we are currently working on, with lots of improvements coming in a couple of months.
There’s something about that photo that makes it difficult for the tech to figure out. We didn’t do an exceptionally good job here. Well, at least we tagged it with happy, which is obvious from the shining face of that Asian woman.
It’s obvious we do well with living creatures. Thank goodness we do not need to figure out if they are male or female and how happy they are.
This one is quite difficult, as it brings a lot of context that the machine needs to know about. We haven’t trained for movie characters, but the tags are in general good. Well, we can argue about the guitar - if you have a wild imagination you can definitely picture Darth Vader with a guitar. Human doesn’t apply much in this case, and we are happy with the low confidence level ;-)
Ever wondered where people come up with ideas for testing? This one is a classic. Definitely no surfboard, and yes, it’s an attractive, fashionable male model ;-)
Toy elephants may look like that in the foreseeable future, but to us this looks like a male robot. Automation is what robots are predestined for, right?
Can’t agree more that the person lying in the grass looks like an alligator. Tags like summer and grass bring a bit of balance to the results. If not accurate, at least a bit closer than “a cow lying down in a grassy field”.
This one is quite tough. What’s clearly visible to a person is not so visible to the machine, which lacks the context of the two major objects in this photo.
It’s a great idea to put a bed next to any desk in the office. Maybe one next to the copy machine would not be such a good idea - too much noise and traffic around it.
We’ve tried to describe this picture and the results are not too bad ;-)
Last but not least, we do not see donuts but a happy male couple in love. Well, we got a bit of spicy results with lower confidence, like wife and caucasian, but still…
We are super thrilled to be part of Food Hacks Berlin, organized by HackerStolz. The idea of the hackathon is to find digital ideas that can revolutionize the food industry. The food delivery process has seen lots of innovation in recent years, but it’s obvious there’s so much more that can be done: better services related to what we consume daily, smart apps that automate the burden of manual calorie tracking, and many more.
Over 100 participants from 20 countries hacked at Rainmaking Loft, one of the well-known startup destinations in Berlin. It’s inspiring to see so many motivated hackers with experience in so many industries. One of the reasons we love hacker events in Berlin is the amazing mix of talent and the diversity of the participants. There was a group of bright Ukrainians who traveled by bus for over 12 hours, plus people from Poland, Holland, the UK and Switzerland, to name a few.
Imagga was an API partner at the event. We trained two special categorizers to help developers get the best out of our image recognition technology and to make sure they build amazing apps - a meal classifier that can recognize over 300 cooked meals, and a food & veggies classifier.
Chris presented the Imagga API at a dedicated workshop, talking about how you can use it to recognize pictures of meals, fruits and veggies.
Amazing apps were built during the weekend - from a smart fridge that can find out which food is aging and needs to be thrown away and reordered, to some quite creative ways to use food:
Neighborhood Cuisine - A mobile app that lets you get together with your neighbors to cook tasty recipes from your leftovers.
HelloFridge - the easiest way to save your ingredients from going bad by combining them to fit delicious recipes.
iTrash - the most intelligent trash can in the world
Surprise Meal - Provides surprise ideas for recipes.
Bulker - Early-morning bakery delivery that loves you
Super Quest - Food AR app for basic food education of children
Mint - A unique way to discover a new culture through music, art and food.
Vollkorntoast - New Shopping Experience
rcply - Search for recipes based on ingredients you already have at home
lekkerlekker - Your one-stop shop for finding the best recipes and directly buying all the groceries you need.
VR-ify - Efficient food testing and prototyping process with VR and brain activity scanner
TraceYourFood - See the journey of the food you consume!
KeineWaste - KeineWaste connects food businesses and volunteers in real-time, converting food waste into food donations.
WTF: What the FOOD - So you'll never run out of müsli again.
Our favorite hack is Food Sherlok, developed by Team Baker Street ;-) Food Sherlok lets you take a photo of any ingredient and gives you nutrition details, recipe suggestions and the possibility to buy it online. The mobile app utilizes the Imagga API for the image recognition part, the Spoonacular API for the nutrition details and the HelloFresh API for the recipe suggestions and possible food delivery.
Here you can find the code of the hack.
The Imagga prize - a GoPro camera, t-shirts and free API access - went to Team Baker Street for their incredible work and valuable feedback.
“Getting an idea took some time, so the first hours we decided on an idea, but it took another hour until it was clear for me what I had to do and how it should look like. Uploading the image and resizing it on the frontend was also harder than I thought, just because some of the APIs expect form data for images."
You can’t do much in the limited time of a hackathon, but the team is already making some bold plans for Food Sherlok. They think it would be quite useful if they could generate location-based data for retailers, improve the image recognition service with data specific to different geographic locations, experiment with new ways to visualize the results (maybe adding something like a search heat map overlay) and offer text-to-speech to make sure refugees can also use the app.
The hipster bar is where only hipsters are allowed! How do you enforce that? With a physical doorman whose job is to ruthlessly turn away guys without beards, or, in the case of Max Dovey’s project - using Imagga’s image recognition technology. The hipster bar was open to the public for the duration of WdW Festival 2015.
Let’s get into the details of this quite unique usage of Imagga’s powerful image recognition technology. To enter the bar, you need to stand in front of a camera that snaps a photo of you and then sends it to Imagga’s servers. The tech then analyses your look and returns how certain the system is that you are a hipster.
If you are found to be over 90% hipster, the door of the bar opens and you can join the great company of people who are hipster enough.
“Hipster” is quite a loose term, usually used to describe a subculture of people who attempt to keep up to date with the latest trends and remain 'hip'. These are men and women in their 20s and 30s who value progressive politics and independent thinking, and often have an appetite for art, indie rock and counterculture. Of course, being a hipster includes a certain look - thick-rimmed glasses, tight-fitting jeans, old-school sneakers, side-swept bangs and beards (men only).
Max Dovey, an artist from Rotterdam who initiated the project, sourced thousands of images of hipsters to be used by our team to build a special hipster deep learning model. The resulting classifier was able to easily distinguish between snaps of hipsters and everyone else. Here’s how it actually worked:
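The door logic itself is tiny once the classifier's answer comes back. A hypothetical Python sketch of it; the 90% threshold comes from the project description above, while the classifier name and response shape are illustrative assumptions:

```python
HIPSTER_THRESHOLD = 90.0  # the installation opened the door at 90%+ confidence


def door_should_open(api_result, threshold=HIPSTER_THRESHOLD):
    """Decide from a custom-classifier response (illustrative shape)
    whether the visitor is hipster enough to be let in."""
    for category in api_result["result"]["categories"]:
        if category["name"] == "hipster" and category["confidence"] >= threshold:
            return True
    return False
```

In the installation, a True result would trigger whatever actuator unlocks the door; everything up to that point is just the snap-photo, send, score loop described above.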
Have another crazy idea? Don’t hesitate to try it out - with our custom training only the sky is the limit... if you are hip enough ;)
Weapons are a controversial topic, especially in zones of conflict. A lot of photos of people holding weapons are posted daily on social media. Analyzing the concentration of, and the motives for, posting this kind of photo can eventually hint at a trend or some potential problems.
In a recent post, Justin Seitz showcases a simple hack for discovering photos of weapons on social media using Imagga’s image recognition API. Not all photos of people with weapons are detected by our general tagging API, but there’s a simple explanation for that - the API hasn’t been specifically trained for the task. Still, it’s quite amazing that we detect various types of weapons in different contexts.
The applications are numerous: detecting concentrations of weapon-related photos and possible militarization in a certain area, or unauthorized/hazardous use of weapons, including by children, to name a few.
Justin has also managed to improve the results by cropping some of the images in advance, so they focus on the eventual weapon. Upcoming updates of our API will handle this even better - we are releasing positional tagging of objects soon, and besides giving you the actual position, this will also improve object-level recognition. Stay tuned!
In addition to the very interesting use case, Justin’s article is a great practical guide on how to use Imagga’s auto-tagging API, so make sure to check it out.
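The kind of filter Justin's hack builds on can be sketched in a few lines of Python: run the general tagging API over each photo and flag anything whose tags suggest a weapon. The watch-list, confidence cutoff and response shape here are all illustrative assumptions:

```python
# Illustrative watch-list; since the general tagging API was not trained
# specifically for this task, a broad list of related tags plus a modest
# confidence cutoff works better than a single exact match.
WEAPON_TAGS = {"weapon", "gun", "rifle", "pistol", "revolver", "firearm"}


def looks_like_weapon(api_result, min_confidence=40.0):
    """Flag a tagging response (illustrative shape) that mentions a weapon.

    Cropping the photo around the suspected object before tagging, as the
    article does, tends to raise these confidences.
    """
    return any(
        t["tag"] in WEAPON_TAGS and t["confidence"] >= min_confidence
        for t in api_result["result"]["tags"]
    )
```

A monitoring script would call this on every incoming social media photo and keep only the flagged ones for human review.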