Higher Education and Research

Forum “The legitimacy of quality assurance in higher education”,
Council of Europe, Strasbourg, 19-20 September 2006
Quality assurance in European higher education:
from adolescence to adulthood
Luc Weber, former Rector of the University of Geneva, President of the CDESR
1. Introduction
Higher education and research are changing radically in Europe today. The outstanding feature is undoubtedly the Sorbonne-Bologna process (1998, 1999, 2001, 2003 and 2005), a joint effort by 45 European countries to create a European higher education area, facilitating student mobility and turning European diversity into a genuine asset. People familiar with the project know that it rests upon ten pillars, the two best known being the division of university studies into three cycles, bachelor’s, master’s and doctorate, and the use of a uniform system of credits (European Credit Transfer System – ECTS) to measure students’ progress. A third pillar – the attention which the generalisation of assessment or accreditation has focused on the quality of institutions of higher education and their activities – is becoming increasingly important at present. Ratification and application of the Council of Europe–UNESCO Convention on the Recognition of Qualifications (1997) is also relevant here.
But the upheavals affecting higher education and research in Europe do not stop there – far from it. Globalisation and the dazzling progress of science and technology are having two decisive effects:
An increasingly competitive climate, which primarily affects businesses and individuals, but is now impacting on universities too: there is growing competition between traditional universities, which themselves face a growing challenge from new-style institutions (remote-study and/or transfrontier universities, private universities, firm-based universities).
In the face of fierce competition from emergent economies such as China and India, which can produce at very low cost and, thanks to improved education, increasingly break new ground, the developed countries must exploit their know-how assets to the full if they want to preserve their privileged living standards. Indeed, this situation already exists within Europe itself, as a result of the reindustrialisation of the countries of central and eastern Europe. This competition represents a major challenge for the governance and leadership of institutions of higher education, and particularly universities. To meet it, and contribute effectively to the knowledge market through their teaching and research, universities must be largely independent of the state and private sponsors – which implies, conversely, that they must be effectively governed and led, and themselves pay scrupulous attention to the quality of the services they provide.
In other words, radical change, the Bologna process and the new competitive climate have now made the concept of quality, to which we shall henceforth apply the generic term “quality assurance”, and which has long been omnipresent in the field of research, one of the key themes in the present debate on higher education policy. Its belated arrival on the scene is surprising, since quality has long been regarded as a vital concept in all systems for the exchange of goods or services. Commercial systems (essentially private sector) and non-commercial systems (essentially public sector) both function better when the quality of goods, services and production factors is supported by effective penalties and rewards. In the commercial sector, penalties and rewards are impersonal, market-determined and essentially reflected in sales. In the public sector, where buying and selling are not generally part of the picture, they are indirect, and chiefly reflected in political support. Looking at this question in connection with higher education is particularly interesting since, although nearly all universities are public institutions, a private sector has been emerging in the last 15 years or so, chiefly in the countries of central and eastern Europe.
The co-existence of a public and a private system raises a whole series of questions relating to public responsibility and governance, both of the system and of institutions. This is why the Council of Europe’s Steering Committee for Higher Education and Research (CDESR) has organised two forums and published two books discussing and summarising the present position on these two issues, i.e. a forum on “Public responsibility” in autumn 2004 (Weber and Bergan) and another on “Governance” in autumn 2005 (Kohler and Huber). This is also why, since quality assurance is becoming a key element in public responsibility and governance, it is now organising a third forum on “The legitimacy of quality assurance in higher education”.
This paper is divided into two parts. In the first (2), we shall try to indicate why quality assurance is so important in higher education today, considering it first from the standpoint of public authorities and then from the standpoint of institutions of higher education. In the second (3), we shall try to decide how public authorities and institutions can best discharge this shared responsibility. In the light of what has gone before, we shall then conclude (4) by considering the types of quality assurance which are likeliest to help improve the quality of higher education, and so bring it to maturity.
2. Why
2.1 Public responsibility
The above-mentioned forum in autumn 2004 (Weber and Bergan, 2005) unequivocally confirmed that higher education and research are the responsibility of government. There are at least two reasons for this:
Higher education extensively benefits the whole community, including people who have not themselves been to university. It is the key element in the information society, and an increasingly important element in economic development, but it also contributes to the social and cultural enrichment, and the cohesion and sustainability, of nations and the world at large;
Governments must ensure, in accordance with the Universal Declaration of Human Rights, that all those who are capable of benefiting from higher education have access to it (United Nations, 1948; UNESCO, 1998). They must eliminate barriers to access which are rooted in discrimination on grounds of gender, skin colour, religion or race, or connected with financial capacity. They must also remedy the lack of information on the benefits of higher education suffered by groups who have previously had no access to it. In other words, they are expected, not only to provide higher education, but also to finance and partly or wholly produce it.
Governments should pay attention to the quality of higher education and research for at least three reasons:
Because they spend large sums on it,
Because there is no automatic and effective system of penalties and rewards in the public sector,
Because they need to join in the communal effort to establish the European higher education and research area.
These were the considerations which led the Ministers of Education involved in the Bologna process to insist, at their meetings in Prague (2001), Berlin (2003) and Bergen (2005) and in the associated communiqués, on the vital need for quality assurance in European universities.
In other words, no one denies that governments are responsible for the quality of the institutions of higher education which they supervise. But does that responsibility stop there, or extend to other institutions too? A look at the higher education on offer in various parts of the world makes it clear that government involvement is not a sine qua non: nearly everywhere, except – for the moment – in western Europe, there is an enormous increase in the number of private, profit-making bodies, which sell their services to students/consumers. In principle, the state puts no money into these private institutions – but does this mean that it can ignore what they do and how they do it? Opinions and practices differ from country to country, but governments are increasingly showing a desire to monitor the quality of these institutions too, chiefly for the purpose of protecting students/consumers. This is consistent with the economic wisdom which leads them to monitor and regulate other private activities; their aim is to guarantee healthy competition and ensure that the quality of services - not easily judged by non-specialists - is at least acceptable.
2.2 The need for quality in institutions of higher education
This reminder of governments’ responsibility for quality assurance in higher education may suffice to justify quality assurance, but it still leaves a great deal unsaid. To understand the real issues and select the best methods, it is essential to realise that quality assurance is vital for the institutions themselves. We may cite two arguments here:
The first concerns the independence of institutions of higher education. Looking at the general history of universities and at the factors which determine their individual excellence, we can see that the best are nearly always those which enjoy considerable independence. It is this which allows them to adopt a proactive or entrepreneurial stance and escape a classic vicious circle, in which checks on independence, more upstream supervision, political micro-management and the multiform external (usually cyclical) pressures which affect numerous universities in continental Europe all combine to sap their dynamism and reduce their sense of internal responsibility. Instead of taking the initiative, they respond when prodded, which makes government feel that it needs to play a bigger part – and inevitably puts them even more on the defensive. In short, restrictions on the autonomy of universities – even those honestly intended for their own good – reduce their quality, instead of improving it.
The second concerns funding. Most European universities are seriously short of funds, chiefly because the absolute increase in public funding falls a long way short of cushioning the financial impact of rising student numbers (in itself, a very positive development). This means that even more emphasis must be laid on direction and management of universities, the aim being to ensure that they respond as effectively as possible to the most pressing needs.
Having said that, we have to decide whether universities are sufficiently well governed and run to justify the autonomy they demand and meet the challenge of under-funding. University staff, and particularly teaching staff, seem convinced at all events that the co-management system – in which they hold a dominant position, even though the participation of students and other groups is institutionalised – guarantees optimum quality of teaching and research. It is true that lengthy training and the stiff competition they face when being appointed, and later when seeking research grants and getting papers published in leading journals, are serious guarantees of their ability and desire to operate effectively. Also relevant is the fact that universities can meet new requirements when appointing new staff. Nonetheless, although this very decentralised system allows universities to adjust to a constantly changing environment, the question remains: does it ensure that they adjust sufficiently? Here, doubt is permissible. For one thing, there are various factors which make adjustment hard, when people are left entirely to their own devices (Weber, 2006a and b). For another, existing systems of university governance are rarely conducive to strategic decision-making, and the people in charge – even outstanding academics – are not always natural leaders or able to exercise genuine leadership. This being so, we can safely say that the quality of most institutions is lower than it could or should be.
There are two opposing viewpoints on this situation and its effects:
Governments conclude – not unreasonably – that it is unacceptable, and feel obliged to intervene and compel universities to do better.
Universities need to realise that, with competition increasing and resources declining, they stand to gain by taking themselves in hand and improving their performance. They also need to realise that failure to do this may prompt government to step in and do it for them, using methods which they may regard as inappropriate, or even positively harmful. In other words, they need to develop, in their own best interests, a genuine, pervasive culture of quality. Moreover, the more independent an institution is of its supervising authority, the more it needs a rigorous quality assurance system - and this, we should remember, depends on sound governance, leadership and management.
3. How
The points we have made, and the arguments we have used, in the first section make it clear that quality assurance – a generic term – is a necessity. First of all, it is an essential task for government, given the importance of higher education for society, the climate of confidence required by the Bologna process and the need to regulate private provision. Secondly, it is directly in the interest of universities themselves, which have everything to gain by using their funds to optimum effect, and are usually unable - because of their very special character - to act in the manner which would benefit them most. Having clearly shown that quality assurance in universities is necessary, from the standpoint of public responsibility and of university governance, we must now consider what we can do to ensure that the efforts made along these lines produce real improvements, and to minimise their harmful side-effects. This is a delicate question and, to answer it, we need a sound grasp of what an institution of higher education, and particularly a university, is.
3.1 The special character of institutions of higher education
Institutions of higher education, and particularly universities, are unique human institutions, if only because they are among our oldest. They are chiefly special in what they do, and in their ways of doing it. Institutions of higher education, and particularly universities:
are repositories of human knowledge, and have the task of transmitting the most useful and/or recent knowledge to their students and, even more, teaching them to learn, i.e. encouraging them to stay curious and equipping them to keep track of future developments in their own fields;
are the place where research generates new knowledge and, by sharing it, help to ensure that it benefits society. They also have a near-monopoly of training for young researchers;
use their knowledge and methods to benefit society by subjecting its problems to fully independent and scientific scrutiny, and disseminating human knowledge as broadly as possible.
In other words, institutions of higher education and universities have a major responsibility to the communities and public organisations which fund their teaching and research, the individuals and firms which support them directly, and the students who follow their courses. Their debt to all these groups is considerable, and they have a duty to provide high-quality teaching and do high-quality research, and also serve the community.
The special character of institutions of higher education can be brought out even more clearly by looking at the nature of the services they provide. It hardly needs saying that these have little in common with the services provided by other public or semi-public bodies which are subject to regular assessment, e.g. public transport companies.
On the teaching side, the vast amount of knowledge which even limited fields embrace today obliges them to strike a balance between transmitting “factual” or “pre-digested” knowledge, training people to teach others, and transmitting concepts and methods which go a long way beyond factual or vocational knowledge. The knowledge acquired by the time students graduate is not easily measured, since the quality of an education is largely determined by the individual’s learning capacity, and appears in what he/she does with it in the early years of a subsequent career. In other words, if an institution is assessed on the knowledge acquired by students at a given point in their studies, the result will partly depend on factors over which it has little control.
Assessment of research faces similar difficulties. Of course, it seems easy to measure the effectiveness of a research project by comparing its results with those expected and/or considering the impact of the publications it generates. But how do we assess an ambitious project which produces results totally different from those expected? Also important is a project’s innovative character – a longer-term thing, and so far harder to measure. And how can we assess research done by philosophers, literary theorists or mathematicians, who spend months reading and thinking, need no extra funding, and finally set out their conclusions in sometimes very short publications?
3.2 Quality assurance – the adolescent phase
Although the first quality assurance initiatives were taken some 20 years ago, when quality agencies were established in countries like the Netherlands, England and France, we have no hesitation in saying that quality assurance in higher education is still at the adolescent stage. One proof of this is the broad range of terms still applied to specific approaches, of which the following, non-exhaustive list (Vlasceanu et al., 2004) gives a fair sample: accreditation, quality assessment, quality audit, quality assurance, licensing, certification, ranking, classification (Carnegie), benchmarking, quality control, culture of quality, descriptors, “summative” and “formative” assessment, quality evaluation, evaluation by students, standardisation, total quality management, qualification, recognition, (quality) review, standards, ISO standards, etc. Moreover, the procedures associated with them apply to institutions, curricula, sub-divisions (faculties, departments), subject areas, courses, research projects – and the list continues.
Taken overall in Europe, this situation can well be termed chaotic, and its effects are very negative, if not seriously harmful.
It does not work: In accreditation systems, experience shows that only a very small minority of institutions or courses fail to make the grade, and that assessment, when provided for, usually has little effect, since its conclusions are ignored.
The cost-benefit ratio is unsatisfactory: Quality assurance is invariably costly, particularly when it is based on a self-assessment report by the institution and inspection by experts. An institution which takes self-assessment seriously commits substantial resources (particularly working time) to it, and outside experts are expensive. And if the results of the exercise are useless to the institution, or the institution ignores them, the situation becomes completely unsatisfactory.
It encourages institutions to think strategically, and bureaucratises assessors: Some types of assessment prompt institutions to react strategically, highlighting their strengths and concealing their weaknesses, instead of facing up to the latter and working on them. As for the assessing agencies or institutes, their determination to be objective may lead them to adopt a bureaucratic stance, treating set procedures and predetermined criteria as more important than the actual assessment. Moreover, as with accreditation, where assessment ultimately sets out to penalise, unequal treatment is a danger, since the line between compliance and non-compliance with the criteria is a very thin one for institutions which are already at the bottom of the list. The ultimate danger is that the results may be arbitrary.
The spread of quality assurance and certain associated strategies are turning quality assurance itself into a business: As methods become more ambitious and refined, so the number of experts they depend on increases, while those experts become less inclined to work for nothing, simply to help a sister institution. Quality assurance is in danger of becoming a full-scale business enterprise, with all the problems of independence which that inevitably entails.
As we see it, the chaotic development of quality assurance to date reflects the lack of adequate research into its scientific and managerial foundations. Essentially, we have had a succession of spontaneous and usually political initiatives, launched in response to immediate pressures by authorities which do not always grasp the task’s full complexity. The result is a tendency to reinvent the wheel, i.e. take no account of others’ experience and fail to allow for the very special features of institutions of higher education. This airy assumption that “it’s quite simple really” explains why we have so many different terms and approaches. No surprise, then, that no one is really satisfied, and that countries are constantly reviewing their methods. But it is a surprise that “scientific” institutions can so totally forget to apply scientific methods to the formulation of quality assurance policies.
3.3 Strategic choices in quality assurance
In devising a national quality assurance system, a country has to choose from among a whole range of alternative solutions, and its choice determines the thinking behind the approach it adopts. We now mean to identify and discuss the main options.
1) “Formative” or “punitive”?
Without necessarily differing much in their approach, quality assurance procedures can have vastly different aims. “Formative” procedures are chiefly designed to help institutions or activities to improve their performance. Here, the purpose of assessment is to help them to form a clearer picture of the things they do well, and the things they do less well – and take the necessary ameliorative action. This approach embodies the spirit which feeds into development of a genuine culture of quality.
“Punitive” procedures lead to a decision which, in its simplest form, says whether or not a quality test - whatever its form - has been passed. Accreditation, registration or certification are examples.
Some people may think this distinction a minor one, but it leads the institutions concerned to adopt wholly different attitudes. An institution seeking accreditation will obviously use its best persuasive powers to show how good it is, or how well it satisfies the criteria; it will adopt a strategy of trying to conceal, or at least minimise, weaknesses of which it is aware. The situation with formative assessment is diametrically different. An institution which takes the exercise seriously, and is conscious of its responsibilities, has everything to gain from revealing both its weaknesses and strengths, i.e. conducting a full-scale SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis, while an institution seeking accreditation may well lose out by playing the truth game and putting all its cards on the table.
2) Stated aims or standard criteria?
This second choice raises a problem similar to the first one. A procedure based on standard criteria certainly has the advantage of providing a uniform basis of comparison for assessment of institutions, curricula, etc. At first sight, this would seem to make for equal treatment. But to what extent can different institutions, curricula, sub-divisions, etc. be validly assessed on set criteria? Such assessment may be perfectly valid when the criteria are widely accepted, e.g. division of studies into three cycles, or use of the credit system in the Bologna process, but it becomes highly dubious when scientific content or educational method are the issue, since these are, by definition, non-standard and changing all the time. This is why the alternative approach – basing assessment on aims declared and pursued - is often better suited to universities. Fitness for purpose is, in other words, the focus. The apparent loss of rigour is offset by an approach which emphasises the institution’s effort at self-criticism and the sound judgment of the experts who have to decide whether the things it is doing will allow it to achieve its goals. This new paradigm may seem less satisfactory to start with, but is rendered necessary by the inherent nature of the services which universities provide.
3) Qualitative or quantitative?
Measuring all the relevant criteria and rating quality as a percentage of a maximum, perfect score would seem the surest path to objective assessment. Obviously, this kind of rigour would be ideal – if it were possible. The problem here is that the realities of higher education are not easily reduced to figures. Of course, there are many data, such as student and graduate numbers, floor space, funds, books, publications, etc., which can indeed be measured, broken down into categories and sub-categories, and used to generate a whole series of arithmetical ratios which can then be used to gauge specific forms of efficiency and/or facilitate comparisons.
The real situation is more complex, however, and quantification of this kind can give a dangerous impression of accuracy. The main problem is that many of the things measured (the indicators) are not homogeneous or sufficiently relevant. For example, in measuring staff/student ratios, one would have to make distinctions based on subject area, level of study, duration of course, degree obtained, origins of students, etc., and make appropriate distinctions for teaching staff as well. In the same way, better results at the end of the first year may be due to a lowering of standards – which may bump up the failure rate at the end of the second year. The fact is, using numerical data to compare institutions can easily lead to false conclusions.
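The danger described above, that an aggregate indicator can invert the picture given by properly disaggregated ones, can be illustrated with a short sketch. All figures, institution names and subject areas below are invented for the purpose of the example:

```python
# Hypothetical data: (students, teaching staff) per subject area
# at two fictitious institutions, A and B.
data = {
    "A": {"humanities": (300, 10), "medicine": (400, 40)},
    "B": {"humanities": (200, 8),  "medicine": (80, 10)},
}

def overall_ratio(inst):
    """Students per staff member, aggregated over all subject areas."""
    students = sum(s for s, _ in data[inst].values())
    staff = sum(t for _, t in data[inst].values())
    return students / staff

def field_ratio(inst, field):
    """Students per staff member within a single subject area."""
    students, staff = data[inst][field]
    return students / staff

# Within each subject area, B has the lower (better) ratio...
for field in ("humanities", "medicine"):
    assert field_ratio("B", field) < field_ratio("A", field)

# ...yet the aggregate ratio makes A look better, simply because A's
# intake is weighted towards the staff-intensive field (medicine).
print(round(overall_ratio("A"), 1))  # 14.0
print(round(overall_ratio("B"), 1))  # 15.6
```

The aggregate figure reverses the conclusion reached field by field (an instance of Simpson’s paradox), which is precisely why comparisons based on undifferentiated indicators can lead to false conclusions.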
To measure academic output, we need to consider the quality of publications or their impact, both immediate and longer-term. To assess a library’s value, simply counting the books is clearly not enough: we need to consider whether they are still useful, and whether they are easily accessible. Similarly, to assess the quality of a specific course, it is not enough, for example, to look at the percentage of graduates who find jobs within six months – we must also consider the quality of those jobs, and see what graduates of five years’ standing are doing today. The champions of quantitative assessment are obviously aware of these difficulties and are constantly refining their measurement procedures. With proper resources, this is perfectly possible in certain areas - but becomes very difficult once we try to establish an arithmetical ratio between two quantities (indicators) and use it to assess academic content, e.g. effectiveness of courses, or include the temporal dimension.
4) The institution or an agency?
The first quality assurance procedures were mainly the brain-child of governments, which set up quality agencies, usually attached to government departments, or simply launched ad hoc assessment schemes. Attention focused chiefly on individual institutions, academic output, curricula or the level in specific subject areas, and the aim was comparison or accreditation. Assessment was primarily external, and institutions themselves played only a minor part in the process. One consequence of this, which we have already mentioned, was that institutions bent over backwards to make the best impression and took little notice of the findings afterwards - unless, of course, they were denied accreditation.
This explains the current trend towards maximum involvement of the people responsible for institutions, curricula, etc. The subsidiarity principle – that decisions must be taken and implemented on the lowest level where their effectiveness is certain - is cited in support of this. Indeed, this was the attitude adopted by the Bologna process ministers, when they declared in their Berlin communiqué (2003) that “consistent with the principle of institutional autonomy, the primary responsibility for quality assurance in higher education lies with each institution itself”. The first stage of quality assurance should be entrusted to institutions, not only because developing a culture of quality is in their interest, but also because they are best equipped for the task. In practice, they should themselves assess quality of teaching, teaching and research units, administrative services (faculties, departments, services, etc.) and curricula, using the best method in each case. Courses should be assessed mainly by students, and curricula, teaching and research units, and administrative services (e.g. student services) be assessed via a three-stage procedure: self-assessment report, inspection and report by independent experts, and rigorous follow-up action on the conclusions.
Institutions and supervising authorities would be wrong to assume, however, that institutions themselves will necessarily make the best job of assessment. For this reason, quality assurance procedures carried out by institutions should themselves be assessed at regular intervals by national or international agencies, or agencies specialising in specific subject areas. The procedure itself would be very much the same: self-assessment report, inspection and report by experts, and follow-up action on the findings.
In systems of this kind, which follow the subsidiarity principle, institutions clearly cannot assess themselves; in principle, they should again be assessed by an external agency – either the same or another specialising in institutional assessment. It is vital that assessment should not stop at internal quality procedures, but should also look at an institution’s ability to change, i.e. act on a strategic vision and on the conclusions of assessment.
Taking things to their logical conclusion, it is clear that these agencies must themselves be assessed, and probably accredited or approved. This is even more important when assessment is a commercial operation, for the best and the worst agencies may then lie side by side. If we are serious about a European higher education area, then we must also accept a situation in which countries (not just small countries, where setting up national agencies is impractical) and institutions are willing to be rated by foreign agencies. At their meeting in Bergen (2005), the Ministers opted for a register of approved agencies. This question is discussed elsewhere in this book; we shall merely say that this is another policy initiative which may be hard, if not impossible, to implement, and that there is a danger that many countries may not take decisions approving agencies seriously.
Obviously, governments have a duty to ensure that agencies monitor the procedures which institutions use to assess their own quality - and also assess institutions overall. It is also important that governments, governmental organisations, and associations of institutions of higher education and students should conclude a formal agreement on assessment and approval of these agencies.
5) Other questions
Obviously, there are other questions which need answering when quality assurance systems are being planned or adjusted at international, national or institutional level. Since space is at a premium and other contributors will be discussing them, we shall mention only four of them here. We need to:
Determine whether assessment findings affect institutions financially and, if so, how: do they reward quality or help institutions to make the necessary improvements? We have actually answered this question earlier: “formative” assessment, i.e. assessment which has nothing to do with penalties, accreditation or funding, is the only kind which provides a basis for objective commitment to improvement. Having said that, growing competition also makes it essential to match funding to performance, at least in the research field.
Guarantee the independence of assessment agencies, which must not be answerable to governments or universities, and ensure that the various viewpoints get an equal hearing. But is this really possible in a situation where agencies – since universities cannot afford to pay them - are funded almost totally by governments?
Guarantee the experts’ independence. At first sight, finding independent experts might seem easy, but things may become more complicated once the question of payment for the very considerable work involved arises.
Decide whether the assessment findings should be published. The aim of transparency suggests that they should be, but this inevitably leads experts to phrase their conclusions far more “diplomatically”, particularly when individuals are affected.
3. Conclusion – towards maturity
Obviously, the space available in this book does not allow us to cover all aspects of establishing a culture of quality at European, national and institutional level. We hope, however, that the points we have made on the legitimacy of quality assurance, and on the main choices which the very special character of institutions of higher education obliges us to make, will contribute to the development of more sophisticated approaches. The need for a culture of quality in higher education is undeniable. It is rooted in governments’ responsibility for higher education – a responsibility recently given a new dimension by the Sorbonne-Bologna process, which depends, among other things, on greater mutual trust between institutions of higher education in Europe. It is also rooted in the duty which every institution has to do its job as well as it can. This is particularly true of universities, which insist on having (and actually have) a large measure of independence, and regard that as essential to fulfilment of their task in a radically changing world. We should remember that, unlike the market, where penalties and rewards are automatic and effective, the public system has to use special instruments to reward good performers and penalise bad.
Twenty-five years of trying, not always successfully, to bring quality assurance into higher education and research have given us considerable experience, both positive and negative, and this helps us to clarify the picture. In this conclusion, we shall try to draw some practical lessons from our discussion of the methodological choices we face when planning quality assurance systems at institutional, national and international level.
Let us start by repeating that:
the procedure should match the exceptional complexity of institutions of higher education and the services they provide,
it should be more formative than punitive,
it should be focused on the future, and particularly institutions’ capacity for change,
it should respect the subsidiarity principle,
it should mobilise institutions and the various groups within them,
its costs should be in line with the benefits it can reasonably be expected to provide,
the experts appointed, either individually or within agencies, should be independent,
the services provided on one level should be monitored by a body on a higher level,
stated aims, and not standard criteria, should be the basis of assessment,
appraisal by experts should be preferred to quantitative measurement, while recognising that carefully formulated indicators are useful.
These criteria are considered important for effective quality assurance. They were the basis of the approach adopted over ten years ago by the European Rectors’ Conference, and today underpin the flagship programme of the European University Association (EUA). The EUA’s institutional assessments (over 150 so far in European and some non-European countries) strike a fair balance between spontaneous commitment on the part of institutions (self-assessment reports) and outside experts’ contributions. However, since institutions themselves commission these assessments, nothing obliges them to act on the findings – which is certainly the main weakness of this approach. Things are different when, as has happened several times in recent years, governments or government departments ask the EUA to assess institutions.
The fact that the Standards and Guidelines for Quality Assurance in the European Higher Education Area (2005), formulated by the European Association for Quality Assurance in Higher Education (ENQA) with its partners3 as part of the Bologna process and at the request of the Ministers of Education, are infused with the same spirit augurs well for the development of a quality assurance strategy which both matches the nature of universities and is effective. Moreover, dynamic countries like Ireland organise quality assurance on these same principles, giving institutions extensive responsibility for regular assessment of their own faculties or departments, and having their procedures assessed by an outside body, e.g. by the EUA in 2005 (see the website of the Irish Universities Quality Board – IUQB). Finally, we may note that a number of European universities have in recent years spontaneously devised internal faculty assessment systems, based on self-assessment and inspection by experts.
We may also note the growing practice of assessing (some say accrediting) internal quality assessment procedures. However, the danger here is that quality measures may be seen as an end in themselves, having no connection with the institution’s strategy and implementation of that strategy, i.e. its response to the challenge of being able to change. The point should also be made that a few institutions find benchmarking useful. Assessment of teaching by students (practised systematically for a considerable time in some countries, not easily introduced in others) can also be very instructive, provided that questionnaires are well designed, and that those in charge of institutions – usually deans – take action on detected failings.
Our earlier discussion of the strategic choices involved in selecting an assessment method suggests that accreditation systems may be open to some reservations. They may be broadly justified to protect consumers in the case of private institutions, but they must be flexibly applied – particularly to ensure that institutions which fail on one criterion, but score well on the others, are not refused accreditation. One interesting use of accreditation involves penalising institutions which fail to attain a certain quality level, as with the EQUIS (European Quality Improvement System) label, which applies to business schools. This gives institutions a further incentive to improve; nonetheless, it should not be substituted for formative assessment. The main reservation here concerns countries which subject all teaching programmes to accreditation. Under the subsidiarity principle – invoked by the Ministers of Education in Berlin (2003) – programme assessment should be the responsibility of the institutions themselves. More seriously, very few programmes are ever refused accreditation, which means that the system costs more than its results warrant. Accrediting whole institutions is still more questionable. This is probably justified for brand-new institutions, but certainly not for those which have been in place for decades, or indeed centuries – provided that outside agencies monitor their internal quality procedures and/or assess their capacity for change. This is an area where non-discriminatory and intelligent solutions must be found.
Let us hope that this chapter, which is the work of an academic, and not an assessment specialist, will convince the sceptics that developing a culture of quality is essential, and also convince the perfectionists that institutions of higher education are complex, but generally mature entities. This being so, we must let them do the job they were meant to do – but not be afraid to subject them to regular professional scrutiny, so that they can remedy their failings.
Bergen Communiqué (2005)
Berlin Communiqué (2003) Realising the European Higher Education Area
Bologna Declaration (1999)
ENQA, The European Association for Quality Assurance in Higher Education, http://www.enqa.eu/
ENQA (2005) Standards and Guidelines for Quality Assurance in the European Higher Education Area
EQUIS, The European Quality Improvement System of the European Foundation for Management Development
ESIB, The National Unions of Students in Europe
EUA, The European University Association
EUA (2005) Review of Quality Assurance in Irish Universities, EUA Institutional Evaluation Programme, Brussels
EURASHE, European Association of Institutions in Higher Education, http://www.eurashe.be/
IUQB, Irish Universities Quality Board, http://www.iuqb.ie/
Kohler, J. and Huber, J. (eds.) (2006) Higher education governance between democratic culture, academic aspiration and market forces, Council of Europe Publishing, Strasbourg
Perellon (2003) La qualité dans l’enseignement supérieur, Le Savoir suisse
Prague Communiqué (2001)
Sorbonne Joint Declaration (1998)
UNESCO-Council of Europe (1997) Convention on the Recognition of Qualifications concerning Higher Education in the European Region, Paris-Strasbourg
UNESCO (1998) World declaration on higher education for the twenty-first century: vision and action, Paris
United Nations (1948) Universal Declaration of Human Rights, New York, http://www.un.org/Overview/rights.html
Vlasceanu et al. (2004) Quality Assurance and Accreditation: A Glossary of Basic Terms and Definitions, UNESCO/CEPES
Weber, L. and Bergan, S. (eds.) (2005) The Public responsibility for higher education and research, Council of Europe Publishing, Strasbourg
Weber, L. (2006a) “European university governance in urgent need of change” in Kohler, J. and Huber, J. (eds.), Higher education governance between democratic culture, academic aspiration and market forces, Council of Europe Publishing, Strasbourg, pp. 63-75
Weber, L. (2006b) “University governance, leadership and management in a rapidly changing environment” in Purser (ed.), pp. ..-.., Understanding Bologna in context, European University Association and Raabe Academic Publishers, Brussels and Berlin

1 Draft prepared for the forum on 19-20 September 2006; do not quote.

2 When we speak of a “sustainable” social system, we mean a system which respects and applies a whole series of social values, such as democracy, respect for human rights, legal settlement of conflicts, tolerance, and fair distribution of the amenities of life, thus ensuring that the tensions inherent in any social system do not augment to a point where the system itself is endangered.

3 The European University Association (EUA), the European Association of Institutions in Higher Education (EURASHE) and the National Unions of Students in Europe (ESIB).