HUMAN RIGHTS IN THE INFORMATION SOCIETY:
RESPONSIBLE BEHAVIOUR BY KEY ACTORS
Palais de l'Europe, Strasbourg, France
12 and 13 September 2005
The discussions during the Forum demonstrated the need for:
(i) Greater protection and promotion of human rights in the Information Society based on a multi-stakeholder and multi-layered approach.
(ii) Development of a deeper understanding of how children and young people interpret and respond to perceived risk of harm from online and related offline activities within the context of their everyday lives and their utilisation of new and emerging social networking and communications technologies.
(iii) Human rights “proofing” of all key actions, decisions and technologies influencing the Information Society.
(iv) Increased provision of media literacy initiatives to minimise the risk of harm from online and related offline activities (intimately linked with member State responsibilities to protect and promote human rights under the European Convention on Human Rights), coupled with better use of civil society and the media to develop clear systems of evaluation of the efficacy of educational initiatives.
Dr Rachel O’Connell and Dr Jo Bryce,
University of Central Lancashire, United Kingdom
As a follow-up to previous Council of Europe Forums concerning the Information Society [1], and as part of the implementation of the Council of Europe intergovernmental work of the Group of Specialists on Human Rights in the Information Society (MC-S-IS), this Forum was organised to discuss the roles and responsibilities of key stakeholder groups in protecting and promoting human rights in the Information Society. This was based, in particular, on the Declaration of the Committee of Ministers on Human Rights and the Rule of Law in the Information Society, of May 2005, and with reference to the European Convention on Human Rights, of November 1950.
Approach and objectives of the Forum
The Forum focused on online environments and issues relating to the right to privacy and freedom of expression balanced against the protection of human dignity and the well-being of persons, with particular reference to the protection of children and young people and the prevention of criminal activities [2].
The need for balance between competing rights and freedoms is ever more necessary within a rapidly evolving technological and economic context at both the member State and European/international level. The development, marketing and use of interactive content and services must be considered in relation to the challenges and opportunities they pose for the protection and promotion of human rights in the Information Society.
One of the key objectives of the Forum was to explore the parameters of the multi-stakeholder approach outlined in the above-mentioned Declaration by bringing together key stakeholder groups from the Information Society industry, member States, civil society and the media, in order to promote dialogue, collaboration, and the sharing of best practice. In this respect, the panel sessions presented clear examples of existing actions and mechanisms implemented by various key stakeholders to protect and promote human rights. It also emerged that there is a need to evaluate the efficacy of these actions, and to consider their evolution in the context of technological and economic change.
The implications of user-generated content and blogging vis-à-vis human rights in the Information Society were a significant theme throughout the Forum.
New communication services and technologies now enable users to be creators, producers and distributors of content, communications and services. This significantly alters how information is processed and shapes the development of a European and global participatory society. Technology allows users to access and share, almost in real time, the most recent information and images of events (e.g. the London bombings in July 2005).
Whilst these changes have significant implications for the development of e-democracy and participatory societies, they also have implications for child protection and the prevention of crime. Examples of potential opportunities, risks and challenges associated with blogging were outlined in the discussions, and the regulatory challenges posed by user-generated content were underlined. Particular reference was also made to the feasibility of direct state or legal regulatory intervention in the domestic spaces and private lives of users of services afforded by new technologies.
Key issues regarding the distribution of regulatory responsibility for user-generated content were discussed, including the possibilities for deputising key industry actors to develop codes of practice. Whilst there was much debate over the human rights issues associated with such a delegation of responsibility from the state to key industry players, education and literacy initiatives in the Information Society were repeatedly highlighted as important elements of the strategy to manage user-generated content.
There was also discussion on the role of public service broadcasters in informing and educating users about human rights within public online spaces. It was suggested that Council of Europe standards should form the basis of such initiatives, and that member States should not subject online content to restrictions which go further than those applied to other channels of content delivery [3]. Public service broadcasters could utilise their existing trust relationship with users to develop commitments towards shared responsibilities in the production and distribution of user-generated content such as blogs.
The centrality of the provision of media literacy initiatives in order to minimise the risk of harm from online and related offline activities was underlined as being intimately linked with member States’ responsibility to protect and promote human rights under the European Convention on Human Rights.
The key skills associated with media literacy were discussed, as well as the importance of integrating existing knowledge and best practice with reference to existing initiatives in various European countries.
These key skills must be supported by programmes of research which examine the context of delivery, teaching methods and learner characteristics. The involvement of children and young people in the production of teaching material and peer mentoring has an important and positive role to play. Education for democratic citizenship refers not only to information about rights, but also to the ability to think critically. Given that work of this nature has been ongoing since the late 1990s, there is a need to ensure that information is shared and that resources are pooled without attempting to 're-invent the wheel'.
The involvement of media professionals would also be useful in the production of teaching materials and in the development of clear systems for evaluating the efficacy of educational initiatives. Without built-in evaluative functions it will be difficult to draw valid conclusions regarding their efficacy. Such actions contribute to the development of guidelines for promoting media literacy, which will require constant refinement in response to ongoing technological innovation. The central focus of this process should be on users being suitably educated and informed to enable them to assume a greater degree of responsibility for their own access to content and services. This is especially important for children and young people, though it should also be considered part of a lifelong learning process, which should integrate current theoretical perspectives on learning and teaching.
Traditional programmes of research designed to explore the consequences of exposure to ‘harmful content’ have been restricted in many respects by the conceptualisation of consumers as passive recipients of information. Viewing children and young people in online environments from this perspective is problematic and has hampered progress toward a more holistic understanding of the interrelated factors that underpin children and young people’s involvement, experiences and interpretations of both online and related offline activities. Reconceptualising children and young people as active consumers and producers of online content and communications has significant implications for the protection of their human rights and for associated regulatory and educational policies. In this respect, it was considered useful to develop a theoretical and research based framework which examines the complexities of exploring, measuring and calibrating the risk of harm from online activities and behaviours.
It was proposed that the concept of harmful content is too limited to denote the scope, scale, nature and extent of activities and behaviours that children and young people may encounter or engage with in online or related offline environments. Instead, it was argued that harmful content is more successfully characterised as a sub-category of a much broader category of activities, and that a more inclusive, super-ordinate term is needed which captures the true scope of children and young people’s opportunities to harness the capabilities of interactive communications technologies (i.e. to be creators, producers and disseminators of both content and communications), and to be aware of the potential dangers which flow from them.
A taxonomy of ‘risk of harm’ from online and related offline activities (O’Connell & Bryce, 2005) was presented indicating the range of possible activities and behaviours which may be deemed to pose a ‘risk of harm’ to children and young people, but crucially to situate them in relation to both normal and illegal activities. By contextualising the ‘risk of harm’ from online activities in this manner, the embedded nature of risk within a host of online and related offline activities can be better appreciated.
However, concerns were expressed that such a taxonomy might serve to legitimise censorship and restrictions on freedom of expression. Certain participants considered that the process of categorising various ‘risk of harm’ activities could shift these legal yet arguably harmful activities into the illegal domain. However, grouping various activities according to the categories of legal, risk of harm and illegal can allow a mapping of how ‘normal’ behaviours can, in certain circumstances, be perceived as ‘risk of harm’ activities (e.g. in the context of dieting, when young people access pro-anorexia online forums).
Several participants considered it essential that educators and policy makers explore these issues using a holistic approach which considers how children, young people and vulnerable adults respond to, and interpret, perceived risk of harm from online and related offline activities within the context of their everyday lives and utilisation of new and emerging social networking and communications technologies, as well as traditional media channels.
Striking a balance between the right to freedom of expression, the right to private life, and human dignity
Different key stakeholders expressed a variety of views during the Forum about the challenges of striking a balance between freedom of expression, privacy, human dignity and freedom from discrimination in the Information Society. There was a high level of consensus that the state is, and should remain, the primary guarantor of human rights. However, some participants felt that the state should be responsible only in collaboration with the industry and non-governmental organisations within civil society. This was related to concerns over the role of the state in regulation and its potential for censorship, particularly in less democratic political systems.
The Australian model was highlighted as a particularly noteworthy example of a responsible and successful collaboration between the state and the Information Society industry in regulating harmful content. This system is based on legislation which requires Internet service providers (ISPs) and other key actors to protect children and young people from harmful Internet content, and to address the associated technological and communicative challenges. Of central importance to the Australian model is the development and provision of technical tools (filtering) for regulating child access to the Internet, together with the development and communication of information which enables users to decide whether to use the provided tools, and to choose what level of restriction to impose for themselves and/or their children. Overall, this model appears to demonstrate successful and effective co-regulation between different stakeholders.
Another example of good practice which was discussed is the PEGI game content rating standard, a successful harmonised European co-regulatory framework which enables responsible action by game publishers and developers through the provision of self-rating mechanisms. This system provides a clear and coherent framework within which consumers can make informed purchasing decisions.
The work of the International Foundation for Online Responsibility and the development of the “.xxx” domain for adult entertainment services is another good example of relevant industry actors assuming responsibility for self-regulation and labelling through collaboration with human rights organisations, child welfare advocates and government and enforcement agencies.
Content rating and filtering technologies, as well as the development of peer rating systems, were also identified as useful tools in enabling parental decisions about regulating child access to the Internet. Whilst recognising the limitations of current technical solutions, the participants noted that efforts are ongoing to improve the efficacy of labelling and filtering solutions (e.g. to increase user trust in the technical tools and the role of the semantic web in providing flexible and efficient labelling and filtering tools). This raised a number of concerns about who should encourage users to rate content, what type of mechanism or independent body could evaluate the efficacy of this approach, and the role of the state and/or industry in protecting end users.
Towards a multi-stakeholder and multi-layered approach to the protection and promotion of human rights in the Information Society
Overall, the Information Society must be respectful of human rights and, to this end, should take a multi-stakeholder and multi-layered approach based on the common values of universal participation, respect, and the assumption by all key stakeholders, including users, of responsibility for their own actions.
The adoption of such a human rights based approach to the issues raised during the Forum was considered essential. Coupled with the technological drive of the Information Society, there is a need to ensure that there is a growing commitment by all key stakeholders to protecting and promoting human rights.
In this respect, it was considered important to develop human rights “proofing” of all key actions, decisions and technologies affecting the Information Society, and to give priority to the development of educational and literacy initiatives. The need to develop clear guidelines for managing the interests and concerns of key actors in a clear and transparent manner was also underlined.
[1] European Forum on "Internet with a human face – a common responsibility" organised by the SafeBorders consortium, in collaboration with the Council of Europe, in the framework of the European Commission's Safer Internet Action Plan (Warsaw, 26 and 27 March 2004); European Forum on Harmful and Illegal Cyber Content (Strasbourg, 28 November 2001).
[2] For example through the application of the Convention on Cybercrime, of November 2001, and the additional Protocol to the Convention on Cybercrime of January 2003, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems.
[3] Principle 1: Content rules for Internet, Council of Europe Declaration on freedom of communication on the Internet, of May 2003.