A definition that is difficult to construct and share

While the term "artificial intelligence" (AI) has entered everyday language and become commonplace in the media, there is still no commonly agreed definition of it.

In the broadest sense, the term refers indiscriminately to systems that are pure science fiction (so-called "strong" AI, endowed with a form of self-awareness) and to systems that are already operational and capable of performing very complex tasks (face or voice recognition, vehicle driving); the latter are described as "weak" or "moderate" AI.

This confusion fuels purely speculative fears (of an autonomous, conscious AI turning against humans) which would remain anecdotal if they did not distort the assessment of the real issues, such as the impact on fundamental rights of decision-making processes based on mathematical models, and thereby hinder the development of regulatory frameworks.


A definition to be restricted, on a case-by-case basis, to the technologies used

AI is in fact a young discipline of about sixty years, which brings together sciences, theories and techniques (including mathematical logic, statistics, probability, computational neurobiology and computer science) and whose goal is to have a machine imitate the cognitive abilities of a human being.

Specialists generally prefer to use the exact names of the technologies actually employed (today, essentially machine learning) and are sometimes reluctant to use the term "intelligence", because the results, although extraordinary in some areas, remain modest compared with the stated ambitions.


What is machine learning?

After two periods of strong development (between 1940 and 1960, then between 1980 and 1990), AI experienced a new boom around 2010 thanks to machine learning algorithms. Two factors explain this renewed enthusiasm among researchers and the computer industry: access to massive volumes of data and the discovery that ordinary computer graphics card processors are highly efficient at accelerating the computation of learning algorithms.

The current "revolution" in AI therefore does not stem from a breakthrough in basic research but from the possibility of exploiting, effectively and at scale, relatively old foundations such as Bayesian inference (18th century) or formal neurons (1943), the latter underpinning one subclass of machine learning, deep learning.
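
To give a sense of how simple that historical building block is, a formal neuron can be sketched in a few lines of Python; the weights and threshold below are arbitrary, purely illustrative values, chosen so that the unit behaves like a logical AND gate.

# A formal neuron in the spirit of McCulloch and Pitts (1943): it "fires"
# (outputs 1) when the weighted sum of its inputs reaches a threshold,
# and stays silent (outputs 0) otherwise.
def formal_neuron(inputs, weights, threshold):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# With these illustrative weights and threshold, the neuron reproduces a
# logical AND over two binary inputs.
print(formal_neuron([1, 1], weights=[1, 1], threshold=2))  # prints 1
print(formal_neuron([1, 0], weights=[1, 1], threshold=2))  # prints 0

Deep learning chains together very large numbers of such units, whose weights are adjusted automatically during the learning phase rather than set by hand.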

Machine learning marks a break with the previous generation of AI, expert systems, through its inductive approach: it is no longer a matter of a computer scientist coding rules by hand, but of letting computers discover those rules themselves through correlation and classification, on the basis of massive amounts of data. In other words, the objective of machine learning is not really to acquire already formalized knowledge but to capture the structure of the data and integrate it into models, in particular in order to automate tasks.


Expert systems and machine learning: two different conceptions of AI
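
The difference between the two conceptions can be illustrated with a deliberately simplified Python sketch built around a toy question such as "is today a high-demand day for ice cream?"; the historical data and the decision rules below are invented purely for illustration.

# Expert-system style: a human expert writes the decision rule explicitly.
def high_demand_expert(temperature_c):
    # The 25-degree threshold is chosen and maintained by a person.
    return temperature_c >= 25

# Machine-learning style: the rule is induced from labelled past observations.
history = [  # (temperature in degrees Celsius, was demand high that day?)
    (12, False), (17, False), (19, False), (22, False),
    (26, True), (28, True), (31, True), (34, True),
]

# "Learning" here simply means picking, among the observed temperatures,
# the threshold that misclassifies the fewest past days.
def errors(threshold):
    return sum((t >= threshold) != label for t, label in history)

learned_threshold = min((t for t, _ in history), key=errors)

def high_demand_learned(temperature_c):
    return temperature_c >= learned_threshold

print(high_demand_expert(27), high_demand_learned(27))  # True True

In the first case the knowledge is written down by a human; in the second it is extracted from the data, which is why the quality and representativeness of that data become decisive.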

Very concretely, during its learning phase the machine searches for links between data previously selected for a specific domain (for example, the number of ice creams sold and the air temperature, recorded over ten years in different cities) and categorizes them. The resulting model can then be used to answer questions such as: if the air temperature is 25 degrees, how many ice creams can I expect to sell in a given place?
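
As a purely illustrative sketch with invented figures, that learning phase can be reduced to fitting a straight line to past observations by ordinary least squares; the resulting model then answers the 25-degree question above.

# Hypothetical historical observations: (air temperature in °C, ice creams sold).
observations = [
    (12, 90), (16, 140), (18, 170), (21, 210),
    (24, 260), (27, 310), (30, 360), (33, 410),
]

# "Learning" here is ordinary least-squares regression: find the line
# sales = a * temperature + b that best fits the observed data.
n = len(observations)
mean_t = sum(t for t, _ in observations) / n
mean_s = sum(s for _, s in observations) / n
a = (sum((t - mean_t) * (s - mean_s) for t, s in observations)
     / sum((t - mean_t) ** 2 for t, _ in observations))
b = mean_s - a * mean_t

# The learned model can now answer the question from the text:
# if it is 25 degrees, how many ice creams can I expect to sell?
print(round(a * 25 + b))

Real systems rely on far richer data and models, but the principle is the same: the relationship is estimated from past observations rather than programmed.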

Although some systems are able to construct models in a relatively autonomous way, human intervention remains essential, whether to choose the learning data, to identify their possible biases or, where possible, to distinguish among the correlations those that could actually be the cause of a phenomenon (if much more ice cream is sold in a given place, is it because of the temperature or because of the presence of a very good ice cream maker?).


The future of AI and its challenges

According to some experts, such as Yann LeCun, an AI researcher and pioneer of deep learning, the ambition to imitate human (or even animal) cognition would require new discoveries in basic research, not just an evolution of current machine learning technologies. These technologies, which are essentially mathematical and statistical in nature, are not able to act on intuition or to build a model of their environment quickly.

The impacts on society, ethics and fundamental rights should therefore be addressed not by fearing that machine learning will give rise to an artificial form of consciousness within 10 or 20 years, but by preventing the bias, discrimination and attacks on privacy, on freedom of expression or conscience, or even on life itself (through autonomous weapons) that can result from a conception of society which reduces it to a mathematical model.

www.coe.int/ai

Towards an application of AI based on human rights, the rule of law and democracy

#COE4AI 

 Contact us