Available Now: Banknote Reader (b-reader) for Visually Impaired People

TOM (Tikkun Olam Makers) is a global movement of communities connecting makers, designers, developers and engineers with people with disabilities (a.k.a. 'Need-Knowers') to develop technological solutions for everyday challenges. Designs are free and available for any user around the world to adapt for their needs! Tikkun Olam (Hebrew for "repair the world") is a Jewish concept defined by acts of kindness performed to perfect or repair a world that is often imperfect.

Each TOM event takes 3 days in the form of a makeathon (a hackathon for makers), during which teams of makers, designers and engineers work together to create a solution for a particular need knower or a group of such people. TOM started in Israel, and more than 20 makeathons have been organised around the world in the last 3 years.

From 20 to 23 April 2017 the first ever TOM makeathon in Europe was organised in Sofia, Bulgaria. TOM Bulgaria held the event at Smart Fab Lab (the first and only fab lab in Bulgaria), with the idea of changing the world for 6 Bulgarian need knowers in 72 hours. You can see the highlights and results of this epic makeathon in the short film below.

Imagga committed to helping blind people recognise Bulgarian banknotes

Anita (29) and Silvia (35) are blind and need help recognising Bulgarian banknotes when they go shopping every day. Bulgarian banknotes have special embossed signs in one of their corners, but these are very hard for most blind people to feel and recognise, especially if the note has been in circulation for a long time.

Silvia’s and Anita’s challenge was to find a solution so they can recognise the banknotes and the currency.

A multidisciplinary team (Cash Vision) of people from academia and industry was formed to tackle this big challenge, including Stavri Nikolov (Imagga’s co-founder and Research Director) and Georgi Kostadinov (Imagga’s Core Technology Lead, who supported the Cash Vision team remotely). Cash Vision team members were (from left to right) Antony, Nesho, Anita, Angel, Nikolay, Stavri, Silvia (in the previous photo) and Georgi (remotely).

Initially, the team thought of creating a mobile phone app, but quickly found out it was challenging for Anita and Silvia to aim their phone's camera accurately at the banknote and take a photo good enough for recognition while holding both the banknote and the phone in their hands. The team then decided to create a small box, like a portable mini scanner, that functions as a banknote reader, naming it the b-note. It uses a Raspberry Pi with a camera sensor, held in a specially designed 3D-printed housing. When a banknote is put in the b-note holder, the camera captures the central part of the bill and image recognition is used to identify its nominal value. The first working b-note prototype was constructed in 48 hours, and extensive tests on tens of banknotes (2, 5, 10, 20, 50 and 100 BGN) were done in the last 24 hours.

Further tests to improve the performance and reliability of the reader were done with other currencies, Monopoly money, paper cut-outs, magazine cut-outs and much more, to make sure the system behaves reliably.

How does b-note work?

Once a banknote is put in the b-note holder for scanning and recognition (irrespective of its orientation), it is photographed and a region of the photo is sent securely to the cloud, where a specially trained Imagga custom classifier recognises its nominal value and returns this information to the b-note device. The device then plays a pre-recorded .mp3 file pronouncing the banknote value or, if the note is not recognised with sufficient confidence, saying 'Unidentified object'. The audio files with the banknote values were recorded by Anita and Silvia themselves, so they can easily recognise the voices even in noisy environments.
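For the technically curious, here is a minimal sketch of that loop as it might run on the Raspberry Pi, written in Python with the picamera and requests packages. Everything specific in it is an assumption for illustration: the classifier ID, audio file names, confidence threshold and endpoint details are placeholders, not the actual b-note code (the real camera code is published on TOM Global's platform, as noted below).

# Illustrative sketch of the b-note recognition loop (not the actual prototype code).
# Assumes a Raspberry Pi camera driven by the picamera package, the requests package,
# and an Imagga custom classifier trained on BGN banknotes. The classifier ID, file
# names, endpoint and threshold below are placeholders.
import subprocess
import requests
from requests.auth import HTTPBasicAuth
from picamera import PiCamera

API_KEY = 'YOUR_API_KEY'
API_SECRET = 'YOUR_API_SECRET'
CATEGORIZER_ID = 'bgn_banknotes'  # hypothetical custom classifier ID
ENDPOINT = 'https://api.imagga.com/v2/categories/%s' % CATEGORIZER_ID
AUDIO = {'2': '2.mp3', '5': '5.mp3', '10': '10.mp3',
         '20': '20.mp3', '50': '50.mp3', '100': '100.mp3'}
CONFIDENCE_THRESHOLD = 60  # below this the result is treated as unidentified

def capture_photo(path='/tmp/banknote.jpg'):
    """Photograph the banknote placed in the b-note holder."""
    camera = PiCamera()
    try:
        camera.capture(path)
    finally:
        camera.close()
    return path

def recognise(path):
    """Send the photo to the cloud classifier and return (value, confidence)."""
    with open(path, 'rb') as image:
        response = requests.post(ENDPOINT,
                                 auth=HTTPBasicAuth(API_KEY, API_SECRET),
                                 files={'image': image})
    categories = response.json().get('result', {}).get('categories', [])
    if not categories:
        return None, 0.0
    best = max(categories, key=lambda c: c['confidence'])
    return best['name']['en'], best['confidence']

def announce(value, confidence):
    """Play the pre-recorded announcement for the recognised value."""
    if confidence < CONFIDENCE_THRESHOLD or value not in AUDIO:
        audio_file = 'unidentified.mp3'
    else:
        audio_file = AUDIO[value]
    subprocess.call(['mpg123', audio_file])  # any command-line mp3 player works

if __name__ == '__main__':
    photo = capture_photo()
    value, confidence = recognise(photo)
    announce(value, confidence)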

The b-note system was subsequently tested with many more banknotes at various events and internally by the Cash Vision team. Two b-note devices are being optimised (better casing and smaller electronics) and will be given to Silvia and Anita to use for real in the autumn.

Ever since the TOM Bulgaria makeathon, the Cash Vision team has received many requests from visually impaired people in Bulgaria who are interested in using the b-note daily. Because of the limited time we had, only the bigger challenge of recognising banknotes was addressed. However, we are considering how the b-note could be extended in the future to recognise other objects that are important for blind people.

On TOM Global's web platform you can find the full specs of the b-note prototype, including building instructions and the camera code used for calling Imagga's API, so that you can make a device just like it for around 100 EUR / 115 USD (cost price).

You can’t fix the world in 72 hours, but you can surely try to hack it and make it better! Leave your thoughts and recommendations below.



Imagga Named As One of the IDC 2016 Worldwide Image Analytics Innovators

Imagga is recognized as one of the 3 pioneering players in the worldwide image analytics market. IDC’s 2016 Innovators report acknowledges companies that offer an inventive technology and/or groundbreaking new business model.

According to the prestigious report, Imagga stands out for offering custom image recognition training using customer-provided training sets. Thanks to the flexible training model, customers get an unprecedented opportunity to make sense of their image content and use the insights for analytics, understanding customers or better monetization strategies. Depending on the complexity of the training model, the actual training takes from a day to a couple of days. Customers are given visual tools to evaluate the results and decide if fine-tuning is needed for greater performance.

According to Carrie Solinger, senior research analyst, Cognitive Systems and Content Analytics research at IDC, "Application of natural language processing and machine learning technologies have advanced image analytics' cost effectiveness and accuracy, exponentially". Services such as Imagga enable businesses to harness the power of machine learning and do what was once done manually at great expense of manpower, or was impossible due to time restrictions. Real-time (or near real-time) image analytics opens up entirely new horizons for companies to optimize their business decisions, with a direct effect on productivity and business results.

You can download this report from IDC here.



Batch Image Processing From Local Folder Using Imagga API

Batch Upload of Photos for Image Recognition

This blog post is part of a series of How-Tos for those of you who are not very experienced and need a bit of help setting up and properly using our powerful image recognition APIs.

In this one we will help you batch process (using our Tagging or Color Extraction API) a whole folder of photos that resides on your local computer. To make that possible we've written a short script in the Python programming language: https://bitbucket.org/snippets/imaggateam/LL6dd

Feel free to reuse or modify it. Here's a short explanation of what it does. The script requires the requests Python package, which you can install using this guide.

It uses requests' HTTPBasicAuth to set up the Basic authentication used by Imagga's API from a given API_KEY and API_SECRET, which you have to set manually in the first lines of the script.

There are three main functions in the script - upload_image, tag_image and extract_colors (a minimal sketch of all three follows after the list).

  • upload_image(image_path) - uploads your file to our API using the content endpoint; the image_path argument is the path to the file on your local file system. The function returns the content id associated with the image.
  • tag_image(image, content_id=False, verbose=False, language='en') - tags a given image using Imagga's Tagging API. You can pass either an image URL or a content id (from upload_image) as the image argument, but in the latter case you also have to set content_id=True. If you set the verbose argument to True, the returned tags will also contain their origin (whether a tag comes from machine learning recognition or from additional analysis). The last parameter, language, lets you have the output tags translated into one of Imagga's 50 (+1) supported languages. You can find the supported languages here: http://docs.imagga.com/#auto-tagging
  • extract_colors(image, content_id=False) - extracts colors from your image using our Color Extraction API. Just like tag_image, you can pass an image URL or a content id (by also setting the content_id argument to True).
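If you prefer to see the plumbing in code, below is a minimal sketch of how such helpers can be wired up with requests and HTTPBasicAuth. Treat the endpoint URLs and response fields as assumptions based on Imagga's API documentation at the time; the linked Bitbucket snippet remains the authoritative version of the script.

# Minimal sketch of the three helpers; endpoint URLs and response fields are
# assumptions, not a verbatim copy of the published script.
import requests
from requests.auth import HTTPBasicAuth

API_KEY = 'YOUR_API_KEY'
API_SECRET = 'YOUR_API_SECRET'
AUTH = HTTPBasicAuth(API_KEY, API_SECRET)
API_BASE = 'https://api.imagga.com/v1'

def upload_image(image_path):
    """Upload a local file to the content endpoint and return its content id."""
    with open(image_path, 'rb') as image:
        response = requests.post('%s/content' % API_BASE, auth=AUTH,
                                 files={'image': image})
    return response.json()['uploaded'][0]['id']

def tag_image(image, content_id=False, verbose=False, language='en'):
    """Tag an image given either its URL or a content id from upload_image()."""
    params = {
        'content' if content_id else 'url': image,
        'verbose': str(verbose).lower(),
        'language': language,
    }
    response = requests.get('%s/tagging' % API_BASE, auth=AUTH, params=params)
    return response.json()

def extract_colors(image, content_id=False):
    """Extract colors from an image given either its URL or a content id."""
    params = {'content' if content_id else 'url': image}
    response = requests.get('%s/colors' % API_BASE, auth=AUTH, params=params)
    return response.json()

With these in place, processing a local file boils down to content_id = upload_image('photo.jpg') followed by tag_image(content_id, content_id=True) and, optionally, extract_colors(content_id, content_id=True).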

Script usage:

Note: You need to install the Python package requests in order to use the script. You can find installation notes here.

You have to manually set the API_KEY and API_SECRET variables found in the first lines of the script by replacing YOUR_API_KEY and YOUR_API_SECRET with your API key and secret.

Usage (in your terminal or CMD):

python tag_images.py <input_folder> <output_folder> --language=<language> --verbose=<verbose> --merged-output=<merged_output> --include-colors=<include_colors>

The script has two required arguments - <input_folder> and <output_folder> - and four optional ones - <language>, <verbose>, <merged_output> and <include_colors>.

  • <input_folder> - required, the input folder containing the images you would like to tag.
  • <output_folder> - required, the output folder where the tagging JSON response will be saved.
  • <language> - optional, default: en, the output tags will be translated into the given language (a list of supported languages can be found here: http://docs.imagga.com/#auto-tagging)
  • <verbose> - optional, default: False, if True the output tags will also contain an origin key (whether each tag comes from machine learning recognition or from additional analysis)
  • <include_colors> - optional, default: False, if True the output will also contain color extraction results for each image.
  • <merged_output> - optional, default: False, if True the output will be merged into a single JSON file; otherwise a separate JSON file is written for each image.
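For example, to tag every image in a local folder called photos with verbose English tags, include color extraction and write one merged JSON file into a results folder (both folder names are just placeholders), you would run:

python tag_images.py photos results --language=en --verbose=True --include-colors=True --merged-output=True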

Imagga Partners with Aylien


We are super excited to announce our partnership with AYLIEN, a natural language processing platform, which will make it possible to add text analytics capabilities to our image recognition and analytics APIs. We believe this partnership will help users of both services better understand their multimedia content and do way more with it.

AYLIEN Text API is a package consisting of eight different Natural Language Processing, Information Retrieval and Machine Learning APIs that help developers extract meaning and insight from text documents. It can be applied in Ad-Targeting, Media Monitoring and Social Listening projects.

Imagga’s Image Recognition API utilizes machine learning, image recognition and deep learning algorithms to identify over 6,000 distinct objects and concepts and return relevant keywords that best describe what’s in the images.

Currently, Imagga's image analysis endpoint is being added to AYLIEN's Natural Language Processing API, giving developers the ability to analyze text and images through one API.


Having access to two powerful technologies through a single API creates endless opportunities for businesses that need to deal with large volumes of user-generated content. Users rarely input or share just text or just images, so being able to analyze and understand both at once opens up amazing new opportunities for any business to distribute and monetize content.
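As a rough illustration of the idea, the sketch below tags the image of a user post with Imagga's API and runs sentiment analysis on its caption with AYLIEN's Text API, then returns both results together. The endpoint paths, parameters and header names are assumptions based on each API's public documentation and are not the combined endpoint mentioned above.

# Illustrative only: analysing one piece of user-generated content (an image URL
# plus a caption) with the two services side by side. Endpoints, parameters and
# header names are assumptions, not the integrated AYLIEN/Imagga endpoint.
import requests
from requests.auth import HTTPBasicAuth

IMAGGA_AUTH = HTTPBasicAuth('YOUR_IMAGGA_KEY', 'YOUR_IMAGGA_SECRET')
AYLIEN_HEADERS = {
    'X-AYLIEN-TextAPI-Application-ID': 'YOUR_AYLIEN_APP_ID',
    'X-AYLIEN-TextAPI-Application-Key': 'YOUR_AYLIEN_APP_KEY',
}

def analyse_post(image_url, caption):
    """Return image tags (Imagga) and caption sentiment (AYLIEN) for one post."""
    tags = requests.get('https://api.imagga.com/v1/tagging',
                        auth=IMAGGA_AUTH, params={'url': image_url}).json()
    sentiment = requests.get('https://api.aylien.com/api/v1/sentiment',
                             headers=AYLIEN_HEADERS, params={'text': caption}).json()
    return {'image_tags': tags, 'caption_sentiment': sentiment}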

Together with AYLIEN we've been testing for some time how our technologies can complement each other, and the results have been very exciting. Text and images are different, but in many cases and applications they complement each other well.

You can try the new hybrid image and text analysis service here. There's a nice demo to play with before you are finally sold (you can see the results for some sample images, but you can also upload your own).


Demystifying Image SaaS Solutions (Infographic)

In most cases when people hear we do image processing and analysis, they exclaim, "Oh, I see, face detection! That's cool!" Well, that's not all we can do; face detection is definitely not the sexiest of the currently available image technologies, but it is by far the most popular. This is why we decided to do an overview of the various image understanding technologies, the integration options available, and the business models currently used in the image Software-as-a-Service world. And what better way to convey such a message than with an image?

[Infographic: Demystifying Image SaaS Solutions]

Feel free to spread the word and share the infographic! If you decide to share only the image and not the whole blog post, please link back to this post.

For your convenience, here are links to some of the key image SaaS players: Blitline, Chute, Cloudinary, Aviary, Recognize.im, Pixolution, Imagga, IQ Engines, Kooba, Lambda Labs, LTU technologies, Idee Inc.

If you know of, or are part of, a significant image SaaS provider that we've missed, please comment here and we may include it in the next version of the infographic.