Focus on responsible AI: a new Council of Europe study draws attention to the responsibility challenges linked to the use of artificial intelligence

Advanced digital technologies and services are increasingly enhanced by self-learning and other AI tools. This has created substantial benefits for society, bringing greater efficiency, accuracy and convenience to many aspects of our lives. Yet public anxiety around AI is also growing. A new Council of Europe study provides a deeper understanding of how AI development affects the exercise of human rights and fundamental freedoms, and calls on lawmakers to consider more carefully how responsibility should be allocated when something goes wrong.

If human rights commitments are to be taken seriously, the experts argue, governments cannot allow the power of AI tools to be accrued and exercised without responsibility. Instead, they must ensure that the private companies that design, develop and deploy AI, and reap enormous profits from it, bear the consequences of any adverse impacts on individuals and groups that may follow. For our societies to benefit from AI advancement, the study calls for effective oversight mechanisms that anticipate and prevent human rights violations and facilitate human-centric innovation.

Karen Yeung on Responsibility and AI

Strasbourg, 9 October 2019

"Everyone has the right to freedom of expression"

Art. 10 European Convention on Human Rights