Robotics and AI

AI for Dummies

6 March 2017 | Written by Andrea Geremicca

Articles of this type usually begin with an academic definition taken from Wikipedia, but not this one: there are too many different definitions, and none of them really helps a neophyte like me understand what artificial intelligence is and how it works.

In several articles I noticed that AI programs are often referred to as AI Agents: objects that can read data from the surrounding environment through sensors and affect its state through actuators. Everything that happens between these two moments is the true "core" of artificial intelligence and is called the control strategy. The operation seems simple enough: the agent receives signals from the outside world through its sensors, then decides how to act and, based on what it has decided, sets its actuators in motion, which modify the surrounding environment in some way. This cycle obviously happens over and over again, and it is called the perception-action cycle.
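
If you like code, a toy version of that loop might look something like this. It is only a sketch: the Environment and Agent classes, their methods and the simple threshold rule are names and choices I made up for illustration, not any real library.

```python
class Environment:
    """A toy world the agent can sense and act upon."""
    def __init__(self):
        self.state = 0

    def sense(self):
        # What the agent's sensors report back
        return self.state

    def apply(self, action):
        # The actuators modify the environment
        self.state += action


class Agent:
    """Perception -> control strategy -> action."""
    def decide(self, perception):
        # The "control strategy": everything between sensing and acting
        return 1 if perception < 3 else -1


# The perception-action cycle, repeated over and over
env, agent = Environment(), Agent()
for step in range(5):
    perception = env.sense()           # sensors read the environment
    action = agent.decide(perception)  # the control strategy chooses
    env.apply(action)                  # actuators change the environment
    print(f"step {step}: perceived {perception}, acted {action}")
```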

To understand an AI Agent I need to contextualize it; I need to understand how it works in a specific market. One of the most developed markets in this regard is finance. Hedge funds and asset-management companies are competing to create a "thinking system capable of learning" that carries out trading operations and adapts intelligently to market conditions, more skillfully and faster than any flesh-and-blood manager. The Financial Times says that the most successful hedge funds today "hire computer scientists much more than economists and investment experts" and "use 'quantitative' techniques enabled by modern computational and mathematical models". AI is just the next step, and it will make even today's fast, ultra-complex computerized investment systems seem archaic. The Warren Buffett of the future may take the form of a super-algorithm.

Let us return to the first diagram: in this case the agent will be a Trading Agent and the environment is likely to be the stock market. Through the sensors we can take in lots of different information, such as the performance of a stock, and we can read news and follow events. Through the actuators we execute the purchase or sale, and the control strategy must of course take into account the many factors that will influence the final decision. That seems clear enough, so let's move to another market and see if I have understood correctly.
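
Just to make it concrete, a very naive Trading Agent could be sketched like this. The price list and the moving-average rule are invented for the example and have nothing to do with a real trading system; they only show sensors, control strategy and actuators in the finance setting.

```python
from collections import deque

class TradingAgent:
    """Sensors: a price feed. Actuators: buy/sell/hold orders."""
    def __init__(self, window=5):
        self.prices = deque(maxlen=window)  # memory of recent prices

    def perceive(self, price):
        # Sensor input: the latest quote
        self.prices.append(price)

    def decide(self):
        # Control strategy: a naive moving-average rule, purely illustrative
        if len(self.prices) < self.prices.maxlen:
            return "hold"
        average = sum(self.prices) / len(self.prices)
        return "buy" if self.prices[-1] < average else "sell"

agent = TradingAgent()
for price in [100, 101, 99, 98, 97, 103, 96]:
    agent.perceive(price)
    print(price, agent.decide())  # actuator output: the order to place
```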

Another fascinating field where AI will find ever more space is the gaming industry.

Every time I think of the mix between games and artificial intelligence, Deep Blue comes to mind. Remember it? Deep Blue was the first computer to win a chess game against a reigning world champion, Garry Kasparov. That first victory, Deep Blue vs. Kasparov, is a famous game, played on February 10, 1996. However, Kasparov won 3 and drew 2 of the following games, beating Deep Blue by a score of 4-2. Deep Blue was then heavily upgraded (unofficially nicknamed "Deeper Blue") and in May 1997 it played Kasparov again, winning the rematch (this part I got from Wikipedia).

A few days ago I read that artificial intelligence (AI) beat humans at poker. It was the first time this had happened, and the result, called "historic" by the scientists, went around the world. But AI had already succeeded back in 1996, no? What is the difference between the two games? To understand this I followed an online lesson on AI terminology. One of the first things I learned is that there is a clear distinction between fully and partially observable environments. An environment is fully observable if our agent is able to inspect all of it; in that case, at any moment, our agent can make a proper decision. It is easy to guess what partially observable means; in this scenario, however, a very interesting factor comes into play that I had never considered: memory. When an agent cannot observe the whole environment, it can help itself with the memory of previous scenarios. Think of Blackjack: the cards are not all observable, but the analysis of previous hands provides important data for making the final decision.
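
Here is a toy sketch of that idea: a partially observable card game where memory guides the decision. The counting scheme is a simplified hi-lo running count I chose only for illustration, not real Blackjack strategy.

```python
class CountingPlayer:
    """The deck is hidden (partially observable), but memory of the
    cards seen so far (a running count) informs the next decision."""
    def __init__(self):
        self.count = 0  # memory built from past observations

    def observe(self, card):
        # Simplified hi-lo: low cards raise the count, high cards lower it
        if card in (2, 3, 4, 5, 6):
            self.count += 1
        elif card in (10, 11):  # tens, face cards and aces
            self.count -= 1

    def bet(self):
        # We never see the whole deck; memory guides the bet instead
        return "raise" if self.count > 2 else "minimum"

player = CountingPlayer()
for card in [2, 5, 10, 3, 6, 4, 11]:
    player.observe(card)
print(player.count, player.bet())
```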

Another very interesting thing I discovered is that the environment has other variables; it can in fact be defined as deterministic or stochastic. Deterministic means the agent's actions univocally determine the result: in chess, for example, there is no randomness when you move a piece. A dice game, on the other hand, is stochastic, because the result cannot be predicted; randomness plays a key role in that environment. The variables do not end here, of course. Here are two more: is the environment discrete or continuous? A discrete environment is one where you have a finite number of possible choices, so it is measurable, as in the game of checkers. In a continuous environment we obviously have an infinite number of possible actions, so it is not measurable; a practical example is the game of darts, where there are infinitely many ways to angle a dart or accelerate it.
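
A quick way to feel the difference between deterministic and stochastic is to compare two toy step functions; the example is entirely made up, a sketch rather than anything standard.

```python
import random

def deterministic_step(position, move):
    # Chess-like: the same move from the same position always
    # produces the same result
    return position + move

def stochastic_step(position, move):
    # Dice-like: the outcome also depends on chance
    return position + move + random.randint(1, 6)

print(deterministic_step(0, 2))  # always 2
print(stochastic_step(0, 2))     # anywhere from 3 to 8, unpredictable

# Discrete vs. continuous, in the same spirit: a discrete environment
# offers a finite list of moves, a continuous one a real-valued choice.
discrete_actions = ["move left", "move right"]   # checkers-like
continuous_action = 37.2                         # a dart angle in degrees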

Finally we have the good-or-bad variable; let me explain: the environment may be benign or adversarial. In the benign scenario, the environment may be completely random, but it has no goal of its own; the weather, for example, can be challenging, but it is not there to fight us. In the adversarial scenario, the environment is our opponent, as in almost every game. Now everything is clear: the difficulty coefficients of chess and poker were not the same, of course.
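
For the adversarial case, the classic trick is to assume the environment always picks the move that is worst for us. Here is a minimal minimax sketch on a tiny game tree; the payoffs are made up purely for the example.

```python
def minimax(node, maximizing):
    """The environment fights back: choose our best move assuming the
    opponent then chooses the reply that is worst for us."""
    if isinstance(node, int):        # leaf: a payoff for us
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A made-up two-ply tree: our three moves, each with two opponent replies
game_tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(game_tree, maximizing=True))  # -> 3
```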

It was an epic moment not because the AI was able to bluff, but because it won against those who were bluffing.

Andrea Geremicca

Contributor

Since 2014 he has been part of the organizing team of TEDx Roma, and he is a visiting professor and mentor at John Cabot University. In his articles, Andrea writes about the impact of exponential technologies on our society.
