The term “artificial intelligence” has been firing people’s imaginations since even before 1955, when it was coined to describe an emerging computer science discipline. Today the term covers an ever-growing list of technologies meant to improve human life. From Alexa and self-driving cars to love robots, your newsfeed is constantly full of AI updates. That newsfeed is itself the product of a (somewhat) well-implemented algorithm. The good news? Like the rest of these technologies, your newsfeed is self-learning and constantly changing, trying to improve your experience. The bad news? Plenty of people know what the most advanced algorithms do, but nobody can really explain why they work. And that’s where things can go wrong.

The Good AI

The AI market is booming. A profitable mix of media attention, hype, startups and enterprise adoption has made AI a household topic. A Narrative Science survey found that 38% of enterprises are already using AI, and Forrester Research predicted that investment in AI would grow by 300% in 2017 compared with 2016.

But what good can artificial intelligence do today?

Natural language generation

Natural language generation produces text from data. It is used to generate reports, summarize business intelligence insights and automate customer service.

Speech recognition

Interactive voice response systems and mobile applications rely on AI’s ability to recognize speech, transcribing and transforming it into a form a computer application can use.

Image recognition

Image recognition has already been used successfully to detect persons of interest at airports, to power retail applications, and more.

Virtual agents/chatbots

Virtual agents and chatbots are used in customer service and support and as smart home managers, letting advanced AI systems interact with humans. Behind them sit machine learning platforms that provide the algorithms, APIs, development tools and training data needed to design, train and deploy models into applications, processes and other machines.

Decision management for enterprise

Engines that build rules and logic into AI systems, covering initial setup and training as well as ongoing maintenance and tuning? Check. This technology has been used for a while now by enterprise applications for decision management and for assisting automated decision-making. There is also AI-optimized hardware: graphics-processing power designed specifically to run AI computational jobs.

AI for biometrics

On a more personal level, the use of AI in biometrics enables more natural interactions between humans and machines, relying on image and touch recognition, speech and body language. In a related vein, software robots can use scripts and other means of automating human actions to execute tasks and processes in place of people, supporting more efficient business operations.

Fraud detection and security

Natural language processing (NLP) supports text analytics by understanding sentence structure, meaning, sentiment and intent through statistical and machine learning methods. It is currently used in fraud detection and security.

The “Black Box” of AI

At the beginning, AI branched out in two directions: machines could reason according to rules and logic, with everything visible in the code; or machines could take a cue from biology and learn by observing and experiencing, with a program generating its own algorithm from example data. Today machines ultimately program themselves, following the latter approach. Since there is no hand-coded system to open up and examine, deep learning in particular is a “black box.”
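To make the contrast concrete, here is a minimal sketch in Python; the spam-filter framing, the example messages and the learn_threshold helper are illustrative assumptions, not anything from the article. The first function is a hand-coded rule that can be read and audited line by line, while the second derives its decision boundary from labeled examples, so its “rule” exists only as a number the program produced for itself.

```python
# Two ways to decide whether a message is spam.

# 1. Hand-coded rules: every decision path is visible in the source.
def is_spam_rules(text: str) -> bool:
    suspicious = ["free money", "click here", "winner"]
    return any(phrase in text.lower() for phrase in suspicious)

# 2. Learned from examples: the program derives its own threshold from
#    labeled data, so the "rule" is never written down by a human.
def exclamation_score(text: str) -> float:
    return text.count("!") / max(len(text), 1)

def learn_threshold(examples: list[tuple[str, bool]]) -> float:
    spam = [exclamation_score(t) for t, is_spam in examples if is_spam]
    ham = [exclamation_score(t) for t, is_spam in examples if not is_spam]
    # Midpoint between the average spam score and the average ham score.
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

examples = [
    ("WINNER!!! Claim your free money now!!!", True),
    ("Meeting moved to 3pm, agenda attached.", False),
    ("Click here!!! Limited offer!!!", True),
    ("Lunch tomorrow?", False),
]
threshold = learn_threshold(examples)

def is_spam_learned(text: str) -> bool:
    return exclamation_score(text) > threshold

print(is_spam_rules("Click here for free money"))        # True, and we can point to the rule
print(is_spam_learned("Congratulations!!! You won!!!"))   # True, but the "why" is a learned number
```

A real deep-learning system pushes this to the extreme: instead of one learned number there are millions of learned weights, and no single line of code states what the system is looking for.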

It is crucial to make sure we know when failures in AI occur, because they will. To do that, we need to understand how techniques like deep learning recognize abstract things. At the simplest level, recognition is based on physical attributes like outlines and colour; the next level picks out more complex features such as basic shapes and textures; the top level draws on everything below it and recognizes the whole, not just the sum of its parts.

There is an expectation that these techniques will be used to diagnose diseases, make trading decisions and transform whole industries. But that shouldn’t happen before we manage to make deep learning more understandable, especially to its creators, and accountable for its uses. Otherwise there is no way to predict failures.

Today mathematical models are already being used to decide who is approved for a loan and who gets a job. But deep learning represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

Starting in the summer of 2018, the European Union will probably require companies to be able to explain decisions made by automated systems. Easy, right? Not really: the task might be impossible when an app or website relies on deep learning, even for something as simple as recommending products or playing songs. Those services are run by computers that have programmed themselves, and even the engineers who built them cannot fully explain how the computers reach their results.

With the advance of technology, logic and reason might need to step back and leave some room for faith. As with human reasoning, we can’t always explain why we’ve made a decision. But this is the first time we are dealing with machines that are not understandable even by the people who engineered them. How will this influence our relationship with technology? A hand-coded system is pretty straightforward to inspect; machine-learning technology is far more convoluted. Not all AI will be this difficult to understand, but deep learning is a black box by design.

A deep network works a bit like the brain it loosely imitates: you can’t look inside to see how it works, because the network’s reasoning is embedded in the behaviour of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The first layer receives the input, performs its calculations and passes a new signal on as output; those results are fed to the neurons in the next layer, and so on.

Because a deep network has many layers, it can recognize things at different levels of abstraction. If you want to build an app, say “Not Hotdog” (“Silicon Valley,” anyone?), the system needs to learn what a hot dog looks like. Lower layers might recognize hot dogs by outlines or colour; higher layers recognize more complex things like texture and details such as condiments.
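To picture those layers in code, here is a minimal sketch, assuming PyTorch; the NotHotdogNet name, the layer sizes and the 64x64 input are illustrative assumptions rather than anything from the show or the article. Each block hands its output to the next, and what each layer ends up responding to is determined by learned weights, not by rules anyone wrote down.

```python
# A tiny convolutional network sketch for a hot-dog-or-not classifier.
import torch
import torch.nn as nn

class NotHotdogNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Lower layers respond to simple attributes: edges, colour blobs.
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Middle layers combine those into basic shapes and textures.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Higher layers respond to larger, more abstract patterns
            # (a bun-and-sausage arrangement, streaks of condiments).
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two output scores: "hot dog" and "not hot dog".
        self.classifier = nn.Linear(64 * 8 * 8, 2)

    def forward(self, x):
        x = self.features(x)        # each layer feeds its result to the next
        x = torch.flatten(x, 1)
        return self.classifier(x)

model = NotHotdogNet()
fake_photo = torch.randn(1, 3, 64, 64)   # stand-in for one 64x64 RGB photo
print(model(fake_photo).shape)           # torch.Size([1, 2])
```

Nothing in this code says what a hot dog looks like. That knowledge would end up spread across the learned weights of the layers after training, which is exactly why the finished network is so hard to explain.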

But just as many aspects of human behaviour can’t be explained in detail, we may not be able to explain everything AI does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

The AI Future

Participants in a recent survey were asked which aspects of AI worried them most. The results were as expected: participants were most worried by the notion of a robot that could cause them physical harm, so machines that involve close physical contact, like self-driving cars and smart home managers, were viewed as risky. When it comes to statistics, language or personal assistants, however, people are more than willing to use AI for everyday tasks. The many potential social and economic benefits of the technology depend on the environment in which it evolves, says the Royal Society.

A robot animated by AI is an example of “embodiment,” which is why applications involving embodiment were viewed as risky. Yet, as data scientist Cathy O’Neil has written, algorithms are dangerous when they operate at scale, their workings are secret and their effects are destructive. Alison Powell, an assistant professor at the London School of Economics, believes this mismatch between perceived and actual risk is common with new technologies: “This is part of the overall problem of the communication of technological promise: new technologies are so often positioned as ‘personal’ that perception of systematic risk is impeded.”

Philosophers, computer scientists and techies draw a distinction between “soft” and “hard” AI. The main difference? Hard AI aims to mimic the human mind. As MIT lecturer Irving Wladawsky-Berger explained in the Wall Street Journal, soft AI is statistically oriented, using computational-intelligence methods to address complex problems through the analysis of vast amounts of information with sophisticated algorithms. For most of us, soft AI is already part of the daily routine, from GPS navigation to ordering food online. Hard AI, according to Wladawsky-Berger, is “a kind of artificial general intelligence that can successfully match or exceed human intelligence in cognitive tasks such as reasoning, planning, learning, vision and natural language conversations on any subject.”

AI is already used to build devices that cheat and deceive or that outsmart human hackers. It is quickly learning from our behaviour, and people are building robots so humanlike they might become our lovers. AI is also learning right from wrong: Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology lead a team that is trying to instill human ethics in AIs by using stories. Just as we teach values to children by reading them stories, AI can learn to distinguish right from wrong and good from bad. And just as civilizations have been built on a contract of expected behaviour, we may need to design AI systems to respect and fit into our social norms. Whatever robot or system we create, it is important that its decision-making is consistent with our ethical judgements.
