
Image Recognition Revolutionizes the Online Experience for the Visually Impaired

People tend to take sight and technology for granted. For a significant group of internet users, the online experience is not so straightforward: the visually impaired need special assistance to experience the digital world. Low-vision aids are diverse, but generally they fall into two categories: aids that translate visual information into alternative sensory information (sound or touch), and aids that adapt the visual presentation to make it more visible. The harder problem remains how to help people who are blind. The emerging technology for assistance in this category uses image processing techniques to substitute for the visual experience. Today we will look at how image recognition is revolutionizing the online experience for the visually impaired.

Blind Users Interacting with Visual Content

Let’s stop for a second to consider the whole online experience for the visually impaired. What happens when a sighted person sees a webpage? They scan it, click links, or fill in forms. For the visually impaired, the experience is different. They use a screen reader: software that interprets the text and images on the screen and reads them aloud to the user. However, narrating each page element in a fixed order is hard to follow, and elements can get skipped. Sometimes there is a vast difference between the visual page elements (buttons, banners, etc.) and the alt text read by the screen reader. SNS (social networking service) pages, with their unstructured visual elements, abundance of links, and horizontally and vertically organized content, make listening to the screen reader even more confusing.

Interacting with Social Visual Content

SNSs make it easy to communicate through various types of visual content. To fully engage with images, visually impaired people need to overcome accessibility challenges associated with the visual content through workarounds or with outside help.

Advancements in artificial intelligence are allowing blind people to identify and understand visual content. These approaches include image recognition, tactile graphics, and crowd-powered systems.

Facebook already generates useful and accurate descriptions of photos algorithmically, at scale and without added latency in the user experience. It provides each visual with a description as image alt text, an HTML attribute designed to let content managers supply a text alternative for images.
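To make the mechanism concrete, here is a minimal sketch of how machine-generated tags could be turned into alt text for an image element. The `generate_tags` function is a hypothetical stand-in for whatever recognition model or API produces the labels; only the HTML `alt` attribute itself is standard.

```python
from html import escape

def generate_tags(image_url):
    """Hypothetical recognition call; a real system would query an
    image-recognition model or API and return descriptive labels."""
    return ["outdoor", "two people", "smiling"]

def img_with_alt(image_url):
    # Compose the labels into a short description, mirroring the
    # "Image may contain: ..." phrasing used for auto-generated alt text.
    description = "Image may contain: " + ", ".join(generate_tags(image_url))
    return '<img src="{}" alt="{}">'.format(escape(image_url), escape(description))

print(img_with_alt("https://example.com/photo.jpg"))
```

A screen reader encountering this element would read the generated description aloud instead of skipping the image or announcing a bare filename.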

Web Accessibility Today

We might think of web accessibility as a universal thing, but web designers do not always have the resources to devote to accessibility, or do not see the value in making sites accessible. A two-dimensional web page translated into a one-dimensional speech stream is not easy to decipher. Most frustratingly, the majority of websites offer insufficient text labeling of graphic content, concurrent events, dynamic elements, or infinitely scrolling pages (i.e., a stream of feeds). Thus, many websites remain inaccessible through screen readers, even ones intended for universal access: library websites, university websites, and SNSs.

The World Wide Web Consortium (W3C), an international community where member organizations and the public work together to develop Web standards, has created accessibility standards. Led by Web inventor Tim Berners-Lee and CEO Jeffrey Jaffe, W3C's mission is to lead the Web to its full potential.

Solutions Helping Visually Impaired Users

Aipoly
There is a new iPhone app that uses machine learning to identify objects for visually impaired people without an Internet connection. The free image-recognition app is called Aipoly, and it makes it easier for people to recognize their surroundings. How does it work? You simply point the phone’s rear camera at whatever you want to identify, and the app speaks what it sees. It can identify one object after another as the user moves the phone around, and it doesn’t require taking pictures. The app can be helpful not only to people with impaired vision but also to those trying to learn a new language.

Aipoly cofounder Simon Edwardsson says the app recognizes images using deep learning, a machine-learning technique inspired by studies of the brain. This is the same technology Facebook uses for recognizing faces and Google uses for searching images. The app breaks the image down into characteristics like lines, patterns, and curves, and uses them to determine the likelihood that the image shows a specific object. The app works fine for objects around the office, and so far it can recognize around 1,000 objects.
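The pipeline Edwardsson describes, low-level features feeding progressively more abstract ones until a class probability comes out, is what a standard pretrained convolutional network does at inference time. Below is a minimal sketch of that idea using an off-the-shelf MobileNet from torchvision; the model choice, the image path, and the top-5 printout are illustrative assumptions, not Aipoly's actual code.

```python
import torch
from PIL import Image
from torchvision import models

# Load a small pretrained classifier of the kind a phone app could run.
weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize

image = Image.open("snapshot.jpg")          # hypothetical camera frame
batch = preprocess(image).unsqueeze(0)      # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]  # likelihood per known object

# Report the most likely objects, as a screen reader could speak them.
for p, idx in zip(*probs.topk(5)):
    print(f"{weights.meta['categories'][idx.item()]}: {p.item():.1%}")
```

Running this in a loop over camera frames, and speaking the top label when its probability is high enough, approximates the continuous point-and-listen experience the app provides.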

Banknote-reader (b-reader)
The banknote reader is a device that helps the visually impaired recognize money. The banknote goes into the b-note holder for scanning and recognition (orientation doesn’t really matter), where it is photographed and sent securely to the cloud. There, an Imagga-trained custom classifier recognizes the nominal value and returns the information to the b-note device, which plays a pre-recorded .mp3 file with the value if it is recognized. The project is part of TOM (Tikkun Olam Makers), a global movement of communities connecting makers, designers, engineers and developers with people with disabilities to develop technological solutions for everyday challenges. On the TOM web platform you can find full specs of the b-note prototype, including building instructions and the camera code used for calling the Imagga API, so you can build a device like it for around 100 EUR (roughly 115 USD).
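For a sense of what the cloud step might look like, here is a minimal sketch of calling Imagga's categorization endpoint with a photographed banknote. The categorizer ID `banknotes_eur`, the credentials, and the response handling are assumptions for illustration; a real b-note build would use the trained classifier ID from the project's published specs.

```python
import requests

API_KEY = "your_api_key"        # Imagga credentials (placeholders)
API_SECRET = "your_api_secret"
CATEGORIZER = "banknotes_eur"   # hypothetical custom classifier ID

def recognize_banknote(image_path):
    """Send the photo to the custom categorizer and return the top label."""
    with open(image_path, "rb") as image:
        response = requests.post(
            f"https://api.imagga.com/v2/categories/{CATEGORIZER}",
            auth=(API_KEY, API_SECRET),
            files={"image": image},
        )
    response.raise_for_status()
    categories = response.json()["result"]["categories"]
    best = max(categories, key=lambda c: c["confidence"])
    return best["name"]["en"], best["confidence"]

value, confidence = recognize_banknote("banknote.jpg")
print(f"Detected {value} ({confidence:.0f}% confidence)")  # then play the matching .mp3
```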

LookTel
LookTel combines a smartphone with advanced “artificial vision” software to create a helpful electronic assistant for anyone who is visually impaired or blind. It can automatically scan and identify objects like money, packaged goods, DVDs, CDs, medication bottles, and even landmarks. All it takes is pointing the device’s video camera at the object, and the device pronounces its name quickly and clearly. It can be taught to identify the objects and landmarks around you, and with that little extra help, LookTel becomes a helpful assistant. It also incorporates a text reader which gives users access to print media.

Seeing AI
Seeing AI is a smartphone app created by Microsoft that uses computer vision to describe the world. Once the app is downloaded, the user can point the camera at a person and it will announce who the person is and how they are feeling. The app also works with products. All of this is done by artificial intelligence running locally on the phone. So far the app is available for free in the US for iOS; it is unclear when the rest of the world and Android users will be able to download it.

The app works well for recognizing familiar people and household products (by scanning barcodes). It can also read and scan documents and recognize US currency. This is no small feat: dollar bills are basically the same size and color regardless of their value, so telling them apart can be difficult for the visually impaired. The app uses neural networks to identify objects, the same family of technology used in self-driving cars, drones, and more. The most basic functions run on the phone itself; however, most features require a connection.

Next Challenges for Full Adoption

Facebook users upload more than 350 million photos a day. Websites rely more and more on images and less on text, and sharing visuals has become a major part of the online experience. Screen readers and screen magnifiers on mobile and desktop platforms help the visually impaired, but more effort needs to go into making the web accessible through design guidelines, designer awareness, and evaluation techniques.

The most difficult challenge ahead is evaluating the effectiveness of image processing. Ultimately it needs to be held to the same standards as other clinical research in low vision. Image processing algorithms need to be tailored to specific disease entities and be available on a variety of displays, including tablets. This field of research has the potential to deliver great benefits to a large number of people in a short period of time.


Are Machines Already Smarter Than Us?

Intelligence has always been an amazing topic for conversations: whether it’s about discussing what it is precisely or other people’s lack of it, it never fails to provide food for thought. Now with the rise of artificial intelligence, we have one more topic to debate, make predictions about and feel excited (or threatened) by. So far we have taught machines to draw, drive cars, write poems, beat humans playing Go, and even chat with us. AI is obviously getting smarter, but is it already smarter than us?

In 2002, Mitchell Kapor, co-founder of the Electronic Frontier Foundation and the first chair of the Mozilla Foundation, and Ray Kurzweil, author, computer scientist, inventor and futurist who works for Google, established a $20,000 wager. The bet was over whether a computer would pass the Turing Test by 2029. They called it “A Long Bet.” Kapor bet against a computer passing the Turing Test by 2029, while Kurzweil believed it would happen. Has the bet been resolved as of 2018? Let’s take a deeper look.

AI: The Origins

Let’s go all the way back to ancient history. Just think about all the myths and stories about artificial beings granted consciousness by a divine power. The seeds of AI were planted by philosophers who tried to describe the process of human thinking as the mechanical manipulation of symbols. In 1308, the Catalan poet and theologian Ramon Llull published Ars generalis ultima (The Ultimate General Art), which perfected his method of using paper-based mechanical means to create new knowledge from combinations of concepts. Following that, in 1666, mathematician and philosopher Gottfried Leibniz published On the Combinatorial Art, which proposed an alphabet of human thought and argued that all ideas are nothing but combinations of a relatively small number of simple concepts. All of this culminated with the invention of the programmable digital computer in the 1940s, which gave scientists the basis to start discussing the possibility of building an electronic brain.

The term “artificial intelligence” was coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted in August 1955 for Dartmouth College. The proposal involved John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Claude Shannon (Bell Telephone Laboratories) and Nathaniel Rochester (IBM). The workshop took place in 1956 and is considered the official birth of the new field. In 1959, Arthur Samuel coined the term “machine learning” while trying to program a computer to play a better game of checkers than the person who wrote the program.

AI: The Test

Back in 1950, Alan Turing devised an actual test to help determine a machine’s ability to exhibit intelligent behavior comparable to that of a human. The test involves a human evaluator who judges natural language conversations between a human and a machine designed to generate human-like responses. The judge is aware that one of the participants is a machine. The conversation is limited to a text-only channel, such as a computer keyboard and screen. If the evaluator cannot reliably tell the machine from the human, the machine passes the test. In this test there are no right and wrong answers, just answers that are close to human speech.

The test was introduced in Turing’s paper “Computing Machinery and Intelligence,” whose first sentence states: “I propose to consider the question, 'Can machines think?’” But thinking is too difficult to define, so Turing replaces this question with another: “Are there imaginable digital computers which would do well in the imitation game?” Turing believed that this new question could be answered.

AI: The Bet

In 2014, a computer successfully convinced a panel of judges that it was human, thus passing the Turing Test. The test was held by the University of Reading, which announced that a computer had passed for the first time. The computer’s name was Eugene Goostman, and it tricked the judges 33% of the time. But did it really win the bet for Kurzweil, so that Kapor owes him $20,000?

Yes, Eugene Goostman passed that Turing Test, fooling the judges more than 30% of the time in five-minute conversations. No, Kapor doesn’t owe Kurzweil $20,000. Yet. The bet had explicit rules, and the experiment at the University of Reading didn’t meet all of the listed criteria. For example, for Kurzweil to win, a computer needs to hold a conversation of at least eight hours and convince two out of three judges.

But why should Kapor be worried?

Machines are getting better at everything we teach them. What makes machines smarter? Seth Shostak, the former director of the Search for Extraterrestrial Intelligence Institute (SETI), believes that we can build computers that beat humans at specific tasks (like winning the game of Go). Machines can’t do everything better, but he thinks that eventually we will design AI that is as complex and intelligent as a human brain.

"But the assumption is that that will happen in this century. And if it does happen, the first thing you ask that computer is: Design something smarter than you are," says Shostak. "Very quickly, you have a machine that's smarter than a human. And within 20 years, thanks to this improvement in technology, you have one computer that's smarter than all humans put together."

AI is learning quickly. Just one recent example is the AI hacker: in 2016, the DARPA Cyber Grand Challenge hosted the first hacking contest to pit bot against bot. Designed by seven teams of security researchers from across academia and industry, the bots were asked to play offense and defense, fixing security holes in their own machines while exploiting holes in the machines of others.

Not to mention the infamous story which the more dramatic among us (or the Black Mirror fans) saw as the beginning of the reign of AI over humans: the time Facebook had to shut down two chatbots because no one understood what they were talking about. The researchers didn’t seem too worried about it. "There was no reward to sticking to English language," Facebook researcher Dhruv Batra told FastCo. "Agents will drift off understandable language and invent codewords for themselves.”

In the meantime, Google is feeding its AI unpublished books and, in return, the AI is composing mournful poems. And if you’ve played with the AI-powered drawing tool that Google released in 2016, you’ve actually helped it learn how to draw. The program is called Sketch-RNN, and it draws pretty well... for a machine. The drawings are basic, but they are not what is important; the method used to create them is. It is paving the way for AI programs that can serve as creative aids for designers, architects and artists.

We, on the other hand, have focused on the image recognition abilities of AI. A while ago we asked you to play the Clash of Tags: players were presented with two sets of images for a given text tag and had to vote for the set that matched the tag better. It turned out that machines were almost as good as humans. So for now the result is even, but the battle is not over.

Human: Intelligence?

So what is intelligence? According to Einstein, “The true sign of intelligence is not knowledge but imagination.” Socrates said, “I know that I am intelligent, because I know that I know nothing.” Philosophers have been caught up since antiquity in the search for the true measure and meaning of intelligence. Today, neuroscientists try to answer questions about intelligence from a scientific perspective. It is widely accepted that there are different types of intelligence—analytic, linguistic, emotional, to name a few—but psychologists and neuroscientists disagree over whether these intelligences are linked or whether they exist independently from one another.

In the meantime, computers will keep getting smarter. Yes, they can process certain kinds of information much faster than any of us can. Computers learn more quickly and narrow complex choices down to the most optimal ones. They have better memories and can analyze huge amounts of information. Computers can calculate and perform tasks without stopping. Humans, on the other hand, are better at making decisions and solving problems. Humans are capable of experiencing life. We have creativity, imagination and inspiration. Computers replicate tasks, but they can’t create. Yet.


AI Policies: What is the world doing to make AI secure?

A couple of months ago, the internet went berserk with the news of Facebook pulling the plug on two bots that had started communicating in their own language. Imaginations and headlines went wild with the possibilities: malicious AI is taking over, doomsday is here with the bots of the Apocalypse. Although the real story was quite different (the bots were turned off because they were designed to communicate with humans, not with each other, and thus were not delivering the expected results), the outcome was simple panic. What we can learn from this is that humans are afraid of their own creation: artificial intelligence.

AI can transform gargantuan amounts of complex information into insight. It has the potential to present solutions, reveal secrets and solve problems. But before we get to the good part, we need to take care of development and deployment. For us to be able to use them, AI systems need to follow the same ethical principles, moral values, professional codes, and social norms we follow. Some of us are excited about the opportunities AI provides; others are suspicious. To become widespread, AI needs to be designed in a way that allows people to understand it, use it and trust it. To ensure the acceptance of AI, public policies should help society deal with AI’s inevitable failures and facilitate adaptation.

Where are we now?

Policies can help AI’s progress or hamper it. We are witnessing the bottleneck to using AI products shift from technology to policymaking. Regulation that is slow to respond raises the cost of compliance and slows the adoption and development of innovations. Thorough, well-thought-out policies can influence the rate and direction of innovation by creating incentives for the private sector. To grasp the current situation, we will take a look at the major players in AI technologies whose decisions will influence the future of policymaking. Yes, you guessed it: China and the USA (Europe also deserves a mention). Any country with AI research expertise that wishes to participate as a producer should be ready for tense labor-market competition from the U.S. and China.

USA

On October 12, 2016, President Obama’s Executive Office published two reports that laid out its plans for the future of artificial intelligence. The report entitled “Artificial Intelligence, Automation and the Economy” concluded that AI-driven automation suggests the need for aggressive public policies and a more robust safety net in order to combat labor disruption. It elaborates on the topics of the previous report, “Preparing for the Future of Artificial Intelligence,” which had recommended publishing a report on the economic impacts of artificial intelligence. AI capabilities focus on the automation of tasks that have required manual labor, which will open new possibilities for the economy; however, disruption of some people’s current livelihoods is inevitable. The report’s objective is to find how to increase the benefits and mitigate the costs.

AI isn’t a science project; it’s commercially important.

The report proposes three broad strategies to ease AI automation into the economy: first, invest in and develop AI; second, educate and train workers for the jobs of the future; and finally, aid workers in the transition and empower them to ensure broadly shared growth. Since AI automation will transform the economy, policymakers need to create, update, strengthen and adapt policies. The primary economic effects under consideration are the beneficial contribution to productivity growth; the new skills the job market will demand (especially higher-level technical skills); the uneven impact of AI on wage and education levels, job types and locations; and the loss of jobs, which might be long-term depending on the policy responses.

China

For the past four years, the US and China have been investing heavily in AI compared to other countries. Until recently, the US seemed like the leader in the tech race, but two years ago China outdid the US in research output. China is emerging as a leader, not a follower. The government is backing research and development, driving China’s economy forward; the total value of China’s AI industries is expected to surpass 1 trillion yuan ($147.8 billion).

On July 20, China’s State Council issued the “Next Generation Artificial Intelligence Development Plan” (新一代人工智能发展规划), which articulates an ambitious agenda for China to lead the world in AI. China intends to pursue a “first-mover advantage” to become the “premier global AI innovation center” by 2030. Wan Gang, the Minister of Science and Technology, stated that China plans to launch a national AI plan which will strengthen AI development and application, introduce policies to contain risks associated with AI, and work toward international cooperation. The plan will also provide funds to back these endeavors.

The guideline states that developing AI is a “complicated and systematic project” that needs a coordinated AI innovation system, not only for the technology but for the products as well. It goes on to state that AI in China should be used to promote the country’s technology, social welfare and economy, safeguard national security, and help the world in general.

The guideline advises that trans-boundary research should connect AI with subjects like psychology, cognitive science, mathematics, and economics. As far as platform construction goes, open-source computing platforms should promote coordination among different hardware, software and cloud providers. This will naturally increase the need for AI professionals and scientists, who need to be prepared for this work.

Europe

The International Business Machines Corporation (IBM) is actively engaged in global discussion about making AI ethical and beneficial. It is working not only internally, but with collaborators and competitors as well.

Because of the constant pace of development, AI is making it difficult for any regulatory agency to keep up with progress, which makes meaningful and timely guidance almost impossible. On the other hand, issues like data privacy and ownership have been discussed in the EU, and algorithmic transparency and accountability have also been considered.

In 2018, the General Data Protection Regulation (GDPR) will be rolled out in the EU. It will restrict automated individual decision-making (algorithms making decisions based on user-level predictors) that significantly affects users. The law provides a “right to explanation”: a user can request an explanation of why an algorithm made a particular decision about them.
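What such an explanation might look like in practice is an open question, but for simple models it can be read off directly. Below is a minimal sketch, assuming a logistic-regression credit model with made-up feature names and data, that reports which features pushed an individual decision up or down; real GDPR compliance would of course involve far more than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows are applicants, columns are predictors.
features = ["income", "debt_ratio", "years_employed"]
X = np.array([[60, 0.2, 5], [25, 0.7, 1], [48, 0.4, 3], [30, 0.9, 0.5]])
y = np.array([1, 0, 1, 0])  # 1 = loan approved

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Per-feature contribution to the decision score (coefficient * value)."""
    contributions = model.coef_[0] * applicant
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {'raises' if c > 0 else 'lowers'} the score by {abs(c):.2f}")

explain(np.array([28, 0.8, 1]))  # why was this applicant likely refused?
```

For deep models no such direct readout exists, which is exactly the tension the next sections return to.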

Safety is important, but so are fairness, equality and inclusiveness, all of which should be built into AI systems. That’s why we need policies and regulations: to ensure AI is used to the benefit of all. IBM is working with governments, media, regulatory agencies and industry sectors: everyone who is willing to have a reasonable discussion on the ethical issues of AI. The aim is to clearly identify the potential and limits of AI and how to make the best use of it.

Who is it up to?

In the short term, it is up to the policymakers and lawyers. In the near future, government representatives will need technical expertise in AI to justify decisions. More research is needed on the security, privacy and societal implications of AI use. For example, instead of cross-examining a person, lawyers may need to cross-examine an algorithm.

As with everything technological, there is definite uncertainty about how strongly these effects will be felt. Maybe AI won’t have a large effect on the economy. But the other option is for the economy to experience a larger shock: changes in the labor market, employees without relevant work skills and in desperate need of retraining. Although no definitive decision can be taken and no deadline for setting policies fixed, the government’s continued involvement with industry, technical and policy experts will play an important role.



Artificial Intelligence Becoming Human. Is That Good or Bad?

The term “artificial intelligence” has been driving people’s imaginations wild since even before 1955, when it was coined to describe an emerging computer science discipline. Today the term covers a variety of technologies meant to improve human life, and the list is ever growing. From Alexa and self-driving cars to love robots, your newsfeed is constantly full of AI updates. Your newsfeed is itself the product of a (somewhat) well-implemented algorithm. The good news? Just like the rest of AI technology, your newsfeed is self-learning and constantly changing, trying to improve your experience. The bad news? A lot of people know what these algorithms do, but nobody can really explain why the most advanced ones work. And that’s where things can go wrong.

The Good AI

The AI market is booming. A profitable mix of media attention, hype, startups and enterprise adoption is making sure that AI is a household topic. A Narrative Science survey found that 38% of enterprises are already using AI, and Forrester Research predicted that investments in AI in 2017 would grow by 300% compared with 2016.

But what good can artificial intelligence do today?

Natural language generation

AI can produce text from data. This capability is used to generate reports, summarize business intelligence insights and automate customer service.

Speech recognition

Interactive voice response systems and mobile applications rely on AI’s ability to recognize speech: it transcribes and transforms human speech into a form usable by a computer application.
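As a minimal sketch of that transcription step, the snippet below uses the open-source SpeechRecognition Python package to turn a WAV recording into text. The file name is a placeholder, and a production IVR system would use a streaming, vendor-grade engine instead.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a pre-recorded utterance; a live system would capture microphone audio.
with sr.AudioFile("caller_utterance.wav") as source:
    audio = recognizer.record(source)

try:
    # Send the audio to a free web recognizer and get text back.
    text = recognizer.recognize_google(audio)
    print("Caller said:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible.")
```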

Image recognition

Image recognition has already been used successfully to detect persons of interest at airports, in retail, and elsewhere.

Virtual agents/chatbots

These virtual agents are used in customer service and support and as smart home managers; such chatbot systems and advanced AI can interact with humans. There are also machine learning platforms which can design, train and deploy models into applications, processes and other machines by providing algorithms, APIs, and development and training data.

Decision management for enterprise

Engines that build rules and logic into AI systems and are used for initial setup/training and ongoing maintenance and tuning? Check. This technology has long been used by enterprise applications for decision management and to assist automated decision-making. There is also AI-optimized hardware, designed with the graphics-processing power to run AI computational jobs.

AI for biometrics

On a more personal level, the use of AI in biometrics enables more natural interactions between humans and machines, relying on image and touch recognition, speech, and body language. And by using scripts and other ways of automating human actions to support efficient business processes, robots can execute tasks and processes instead of humans.

Fraud detection and security

Natural language processing (NLP) powers text analytics by understanding sentence structure and meaning, sentiment and intent through statistical and machine learning methods. It is currently used in fraud detection and security.
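To make the statistical side concrete, here is a toy sketch of the kind of intent classification a fraud filter might build on, using scikit-learn. The training messages and labels are invented for illustration; a real system would train on large labeled corpora.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: messages labeled as suspicious (1) or normal (0).
messages = [
    "verify your account now or it will be closed",
    "urgent: confirm your password immediately",
    "lunch at noon tomorrow?",
    "here are the meeting notes from today",
]
labels = [1, 1, 0, 0]

# TF-IDF turns text into weighted word counts; the classifier learns
# which words signal suspicious intent.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["please confirm your account password"]))  # likely [1]
```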

The “Black Box” of AI

In the beginning, AI branched out in two directions: machines should reason according to rules and logic (with everything visible in the code), or machines should mimic biology and learn from observing and experiencing (a program generates an algorithm based on example data). Today machines ultimately program themselves, following the latter approach. Since there is no hand-coded system that can be observed and examined, deep learning is effectively a “black box.”

It is crucial to know when failures in AI occur, because they will. To do that, we need to know how techniques like deep learning work, including how they recognize abstract things. In simple systems, recognition is based on physical attributes like outlines and colour; the next level recognizes more complex things like basic shapes, textures, and so on. The top level can recognize all the levels and the whole, not just a sum of its parts.

There is the expectation that these techniques will be used to diagnose diseases, make trading decisions and transform whole industries. But that shouldn’t happen before we manage to make deep learning models more understandable, especially to their creators, and accountable for their uses. Otherwise there is no way to predict failures.

Today mathematical models are already being used to decide who is approved for a loan and who gets a job. But deep learning represents a different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

Starting in the summer of 2018, the European Union will likely require that companies be able to explain decisions made by automated systems. Easy, right? Not really: the task might be impossible when apps and websites use deep learning, even for something as simple as recommending products or playing songs. Those services are run by computers that have, in effect, programmed themselves, and even the engineers who built them cannot fully explain how the computers reach their results.

“It might be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual.”

With the advance of technology, logic and reason might need to step aside and leave some room for faith. Just as with human reasoning, we can’t always explain why we’ve made a decision. However, this is the first time we are dealing with machines that are not understandable even to the people who engineered them. How will this influence our relationship with technology? A hand-coded system is pretty straightforward, but any machine-learning technology is far more convoluted. Not all AI will be this difficult to understand, but deep learning is a black box by design.

A deep network works a bit like the brain that inspired it: you can’t look inside to find out how it works, because the network’s reasoning is embedded in the behaviour of thousands of simulated neurons. These neurons are arranged into dozens or even hundreds of intricately interconnected layers. The first layer receives input and performs calculations before emitting a new signal as output; the results are fed to neurons in the next layer, and so on.

Because a deep network has many layers, it can recognize things at different levels of abstraction. If you want to build an app, let’s say “Not a HotDog” (“Silicon Valley,” anyone?), the system needs to know what a hot dog looks like. Lower layers might recognize hot dogs based on outlines or color; higher layers will recognize more complex things like texture and details like condiments.
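A common way to build such a classifier, and a minimal sketch of the layered idea, is to keep a pretrained network's lower layers (the outline-and-color features) and retrain only a small binary head on hot dog photos. Everything below, from the model choice to the dummy batch, is an illustrative assumption rather than the show's actual app.

```python
import torch
from torch import nn
from torchvision import models

# Start from a network whose early layers already detect edges,
# colors and textures learned from millions of images.
weights = models.MobileNet_V2_Weights.DEFAULT
model = models.mobilenet_v2(weights=weights)

# Freeze the pretrained feature layers...
for param in model.features.parameters():
    param.requires_grad = False

# ...and replace the final classifier with a two-way head:
# hot dog vs. not hot dog.
model.classifier[1] = nn.Linear(model.last_channel, 2)

# Only the new head's parameters get trained on labeled photos.
optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 8 "photos".
images = torch.randn(8, 3, 224, 224)   # stand-in for real preprocessed images
labels = torch.randint(0, 2, (8,))     # 1 = hot dog, 0 = not hot dog
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

The point of the sketch is that nothing in the frozen layers says "hot dog"; the concept only emerges from how the new head weighs thousands of anonymous learned features, which is precisely why the result resists explanation.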

But just as many aspects of human behavior can’t be explained in detail, it might be the case that we won’t be able to explain everything AI does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

Just as civilizations have been built on a contract of expected behaviour, we might need to design AI systems to respect and fit into our social norms. Whatever robot or system we create, it is important that its decision-making be consistent with our ethical judgements.

The AI Future

Participants in a recent survey were asked what worries them most about AI. The results were as expected: participants were most worried by the notion of a robot that could cause them physical harm, so machines that come into close physical contact, like self-driving cars and home managers, were viewed as risky. When it comes to statistics, languages and personal assistants, however, people are more than willing to use AI in everyday tasks. The many potential social and economic benefits of the technology depend on the environment in which it evolves, says the Royal Society.

Giving AI a robot body is known as “embodiment,” and applications that involve embodiment were viewed as risky. As data scientist Cathy O’Neil has written, algorithms are dangerous if they possess scale, their workings are secret, and their effects are destructive. Alison Powell, an assistant professor at the London School of Economics, believes this mismatch between perceived and potential risk is common with new technologies: “This is part of the overall problem of the communication of technological promise: new technologies are so often positioned as ‘personal’ that perception of systematic risk is impeded.”

Philosophers, computer scientists and techies distinguish between “soft” and “hard” AI. The main difference? Hard AI’s main goal is to mimic the human mind. As Irving Wladawsky-Berger, a Wall Street Journal columnist and MIT lecturer, explained, soft AI’s main purpose is to be statistically oriented and to use computational intelligence methods to address complex problems based on the analysis of vast amounts of information using sophisticated algorithms. For most of us, soft AI is already part of our daily routine, from GPS navigation to ordering food online. Hard AI, according to Wladawsky-Berger, is “a kind of artificial general intelligence that can successfully match or exceed human intelligence in cognitive tasks such as reasoning, planning, learning, vision and natural language conversations on any subject.”

AI is already used to build devices that cheat and deceive or that outsmart human hackers. It is quickly learning from our behavior, and people are building robots so humanlike they might become our lovers. AI is also learning right from wrong: Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology lead a team trying to instill human ethics in AIs by using stories. Just as in real life we teach human values to children by reading them stories, AI can learn to distinguish right from wrong, good from bad.
