Quality Assessment in Higher Education

This study aims to identify the significance of quality assessment in higher education and to show how assessment contributes to the improvement of the educational process and of student learning. Because assessment is an ongoing process of setting standards for higher education and measuring progress toward learning outcomes, this study helps to determine which new teaching methodologies and innovations are required in this era of global competition in education.


INTRODUCTION
Assessment is the process of documenting, often in measurable terms, knowledge, skills, attitudes and beliefs. Assessment plays an important role in the teaching-learning process at all levels of education. Such assessment plays a significant part in the future of students; there is no doubt that any assessment system will determine what students learn and the way in which they learn it. Hence assessment determines the way in which we teach, the teaching methodologies we use, and the innovations we introduce into those methodologies.
But assessment is not just about grading and examinations. It is also about getting to know the students and the quality of their learning, and about using this knowledge and understanding for their benefit. Assessment is without doubt one of the major "drivers" of the teaching-learning process. It is thus important for teaching staff to be familiar not only with the technical aspects of the many different forms of assessment currently in use but also with their advantages and limitations, as well as with broader assessment issues and concerns.
Assessment is an ongoing process of setting high expectations for student learning, measuring progress toward established learning outcomes, and providing a basis for reflection, discussion and feedback to improve University academic programs. It is a systematic and cyclic process that makes expectations and standards explicit and public.

WHAT ASSESSMENT IS NOT
- Student grading. Many assume that a midterm or final examination constitutes assessment. Results of those exams can be used to give feedback to an individual student and add generally to an understanding of class achievement in that subject. A robust assessment program, however, seeks to understand all influences on student learning, including the relationship between student learning, curriculum development and institutional learning outcomes.
- Faculty/course evaluation. Again, this is only a small part of a broad-based objective to support and improve student learning. For example, results of evaluations may highlight ways to improve teaching techniques, to redesign a course, or to resequence courses in the curriculum.
- Part of an accreditation process. Assessment is not just a period of intense activity in preparation for a visit from an accrediting body that subsides when the visit is over. While periodic accreditation visits are often the impetus an institution needs to begin working toward an assessment program, a healthy institution perpetuates the program and embraces it as a worthwhile endeavor in the name of student learning. It becomes part of the culture of the institution.

SCOPE & METHODOLOGY
Different questionnaire forms are used for assessment in Higher Education Institutes, such as:
- Self Assessment Report
- Faculty Course Review Report
- Student Course Evaluation Questionnaire (Pro forma)
- Teacher Evaluation Form

To assess a department or program, a rubric is created that assesses the goals of the program. The assessment rubric goes beyond A-B-C grades and looks at how well students have grasped the learning goals set by the faculty. For an English program, the assessment rubric would include how well students write, including the development of content and the organization of the writing. For a math program, the assessment rubric could include how well students are able to grasp concepts as well as solve specific problems.
In some higher education institutes the final exams are also included in the rubric, more to look at how well students grasped the content of the course than for grading. The same model can be used in each class, with the faculty member looking at course goals and asking the question: does the assessment show that students were able to learn and meet the goals I set for this course? Assessing the course and assessing the whole program allows both individual faculty members and department or division chairs to refine and design course material that allows maximum learning for all students, both traditional and adult learners. Assessment of individual students must be an ongoing process throughout the semester or quarter, and assessment must be able to measure higher-level skills.
An institutional assessment office or committee typically carries out the following functions:
- Direct and plan the University assessment process and procedures.
- Facilitate and assist University programs in the design and development of appropriate assessment tools.
- Summarize assessment data for use in curricular review and improvement.
- Coordinate assessment data, results and reports for institutional effectiveness and accreditation.
- Recommend and review faculty development activities related to assessment and improvement of student learning.
- Communicate to the University community about assessment information and activities.
- Represent the University in assessment consortia, councils and other state, regional and national organizations.

ASSESSMENT AND GRADING
Assessment is more than just grading. The words "assessment" and "grading" are sometimes used interchangeably, but it is helpful to distinguish between them.
Assessment is something you do every day as you gauge where students are in the learning process. You are assessing your students when you ask them questions, read their homework, and listen to their mathematical conversations. These assessments guide your instructional decisions regarding pacing, teaching strategies, and "where to go from here." Getting as accurate a reading as possible requires that students be observed and assessed in real situations; hence the term authentic assessment, which is used frequently in educational reform.
Assessment should be part of the ongoing educational process and should enhance learning. Unlike standardized tests, which create a break in learning in order to take a measurement, assessment should be part of the natural flow of the classroom. When the curriculum provides a window into a student's thinking, that is a natural time to assess the student. Such an assessment need not be something you assign a specific grade to; it may be simply for informational purposes, both for you and for the student.

GRADING HOMEWORK
There is not enough time in the day to thoroughly grade every piece of student homework that comes in. Most experienced IMP teachers grade the bulk of homework for completion. This can be done by stamping the homework or marking it off in your grade book as students come into class. To build in more accountability, you can occasionally focus on one particular part of the assignment or ask a specific question to gauge how students did.

GRADING GROUP PARTICIPATION
As you observe groups working, you will be getting insight into how well they are able to share the tasks they are assigned, and can give the group as a whole a grade on its members' ability to collaborate.
You may also find it helpful to have group members grade each other periodically on participation. You might have students do some self-reflection and grade themselves as participants in their groups. Students are typically very honest. In fact, many are too hard on themselves, so you will want to reserve the right to raise self-assigned scores.

GRADING GROUP PROJECTS
Occasionally, you may need to assign grades to projects or investigations done by each group as a whole. The simplest approach is to assign the same grade to each group member. As an alternative, you can give a lump sum to the group and have group members decide how to allocate it. For example, suppose you want to allow each student a maximum of 10 points on a given assignment, which would be a total of 40 points for a four-person group. If the group did B work, you might give them 34 points and have the group divide the total among themselves (and justify their decision).
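The lump-sum scheme above is easy to check mechanically. The sketch below (hypothetical helper, not from the source) validates a proposed split against the two rules implied in the example: the split must use exactly the awarded total, and no member may exceed the per-student maximum.

```python
def validate_split(group_total, split, max_per_student=10):
    """Check that a proposed point split respects the group-grading rules."""
    if sum(split) != group_total:
        return False  # the split must use exactly the awarded total
    # every member's share must lie between 0 and the per-student cap
    return all(0 <= s <= max_per_student for s in split)

# A four-person group awarded 34 points for B-level work:
print(validate_split(34, [10, 9, 8, 7]))     # True: a fair, justified split
print(validate_split(34, [10, 10, 10, 10]))  # False: sums to 40, not 34
```

The same helper also rejects splits that concentrate too many points on one member, which keeps the group's justification honest.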

ASSESSMENT IS INHERENTLY A PROCESS OF PROFESSIONAL JUDGMENT
The first principle is that professional judgment is the foundation for assessment and, as such, is needed to properly understand and use all aspects of assessment. The measurement of student performance may seem "objective" with such practices as machine scoring and multiple-choice test items, but even these approaches are based on professional assumptions and values. Whether that judgment occurs in constructing test questions, scoring essays, creating rubrics, grading participation, combining scores, or interpreting standardized test scores, the essence of the process is making professional interpretations and decisions. Understanding this principle helps teachers and administrators realize the importance of their own judgments and those of others in evaluating the quality of assessment and the meaning of the results.

ASSESSMENT IS BASED ON SEPARATE BUT RELATED PRINCIPLES OF MEASUREMENT EVIDENCE AND EVALUATION
It is important to understand the difference between measurement evidence (differentiating degrees of a trait by description or by assigning scores) and evaluation (interpretation of the description or scores). Essential measurement-evidence skills include the ability to understand and interpret the meaning of descriptive statistical procedures, including variability, correlation, percentiles, standard scores, growth-scale scores, norming, and principles of combining scores for grading. A conceptual understanding of these techniques is needed (not necessarily knowing how to compute statistics) for such tasks as interpreting student strengths and weaknesses, weighing reliability and validity evidence, determining grades, and making admissions decisions. Schafer (1991) has indicated that these concepts and techniques comprise part of an essential language for educators. They also provide a common basis for communication about "results," interpretation of evidence, and appropriate use of data. This is increasingly important given the pervasiveness of standards-based, high-stakes, large-scale assessments. Evaluation concerns the merit and worth of the data as applied to a specific use or context. It involves what Shepard (2000) has described as the systematic analysis of evidence. Like students, teachers and administrators need analysis skills to effectively interpret evidence and make value judgments about the meaning of the results.

ASSESSMENT DECISION-MAKING IS INFLUENCED BY A SERIES OF TENSIONS
Competing purposes, uses, and pressures result in tension for teachers and administrators as they make assessment-related decisions. For example, good teaching is characterized by assessments that motivate and engage students in ways that are consistent with teachers' philosophies of teaching and learning and with theories of development, learning and motivation. Most teachers want to use constructed-response assessments because they believe this kind of testing is best for ascertaining student understanding. On the other hand, factors external to the classroom, such as mandated large-scale testing, promote different assessment strategies, such as using selected-response tests and providing practice in objective test-taking (McMillan & Nash, 2000). Further examples of tensions include the following.
- Learning vs auditing
- Formative (informal and ongoing) vs summative (formal and at the end)
- Criterion-referenced vs norm-referenced
- Value-added vs absolute standards
- Traditional vs alternative
- Authentic vs contrived
- Speeded tests vs power tests
- Standardized tests vs classroom tests

These tensions suggest that decisions about assessment are best made with a full understanding of how different factors influence the nature of the assessment. Once all the alternatives are understood, priorities need to be set; trade-offs are inevitable. With an appreciation of these tensions, teachers and administrators will be able to make better informed and better justified assessment decisions.

ASSESSMENT INFLUENCES STUDENT MOTIVATION AND LEARNING
Grant Wiggins (1998) has used the term 'Educative Assessment' to describe techniques and issues that educators should consider when they design and use assessments. His message is that the nature of assessment influences what is learned and the degree of meaningful engagement by students in the learning process. While Wiggins contends that assessments should be authentic, with feedback and opportunities for revision to improve rather than simply audit learning, the more general principle is to understand how different assessments affect students. Will students be more engaged if assessment tasks are problem-based? How do students study when they know the test consists of multiple-choice items? What is the nature of feedback, and when is it given to students?

International Letters of Social and Humanistic Sciences Vol. 50
How does assessment affect student effort? Answers to such questions help teachers and administrators understand that assessment has powerful effects on motivation and learning. For example, recent research summarized by Black & Wiliam (1998) shows that student self-assessment skills, learned and applied as part of formative assessment, enhance student achievement.

ASSESSMENT CONTAINS ERROR
Teachers and administrators need to know not only that there is error in all classroom and standardized assessments, but also, more specifically, how reliability is determined and how much error is likely. With so much emphasis today on high-stakes testing for promotion, graduation, teacher and administrator accountability, and school accreditation, it is critical that all educators understand concepts like standard error of measurement, reliability coefficients, confidence intervals, and standard setting. Two reliability principles deserve special attention. The first is that reliability refers to scores, not instruments. Second, teachers and administrators need to understand that, typically, error is underestimated. A recent paper by Rogosa (1999) effectively illustrates the underestimation of error by showing, in terms of percentile ranks, probable true-score hit rates and test-retest results.
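The standard error of measurement named above has a simple textbook form, SEM = SD * sqrt(1 - reliability), and it is what turns a reliability coefficient into a confidence band around an observed score. The sketch below uses illustrative numbers, not data from the paper:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(observed, sd, reliability, z=1.96):
    """Approximate 95% band for the true score around an observed score."""
    e = z * sem(sd, reliability)
    return observed - e, observed + e

# An illustrative test with SD = 15 and reliability .91:
error = sem(15, 0.91)                  # 15 * sqrt(0.09) = 4.5
low, high = confidence_interval(100, 15, 0.91)
print(round(error, 1))                 # 4.5
print(round(low, 1), round(high, 1))   # 91.2 108.8
```

Even with a reliability of .91, an observed score of 100 only pins the true score to roughly 91-109, which is one concrete way to see why error is so easily underestimated.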

GOOD ASSESSMENT ENHANCES INSTRUCTION
Just as assessment impacts student learning and motivation, it also influences the nature of instruction in the classroom. Considerable recent literature has promoted assessment as something that is integrated with instruction, not an activity that merely audits learning (Shepard, 2000). When assessment is integrated with instruction, it informs teachers about which activities and assignments will be most useful, what level of teaching is most appropriate, and how summative assessments can provide diagnostic information. For instance, during instructional activities, informal formative assessment helps teachers know when to move on, when to ask more questions, when to give more examples, and which responses to student questions are most appropriate. Standardized test scores, when used appropriately, help teachers understand student strengths and weaknesses and target further instruction.

Good assessment is valid. Validity is a concept that needs to be fully understood. Like reliability, there are technical terms and issues associated with validity that are essential in helping teachers and administrators make reasonable and appropriate inferences from assessment results (e.g., types of validity evidence, validity generalization, construct underrepresentation, construct-irrelevant variance, and discriminant and convergent evidence). Of critical importance is the concept of evidence based on consequences, a new major validity category in the recently revised Standards. Both intended and unintended consequences of assessment need to be examined with appropriate evidence that supports particular arguments or points of view. Of equal importance is getting teachers and administrators to understand their role in gathering and interpreting validity evidence.

Good assessment is fair and ethical. Arguably, the most important change in the recently published Standards is an entirely new major section entitled "Fairness in Testing."
The Standards present four views of fairness: as absence of bias (e.g., offensiveness and unfair penalization), as equitable treatment, as equality in outcomes, and as opportunity to learn. They include entire chapters on the rights and responsibilities of test takers, testing individuals of diverse linguistic backgrounds, and testing individuals with disabilities or special needs. Three additional areas are also important:
- Student knowledge of learning targets and the nature of the assessments prior to instruction (e.g., knowing what will be tested, how it will be graded, scoring criteria, anchors, exemplars, and examples of performance).
- Student prerequisite knowledge and skills, including test-taking skills.
- Avoiding stereotypes.

THE IMPLEMENTATION OF A QUALITY ASSESSMENT SYSTEM
As stated before, Quality Assurance became one of the main concerns of the Government. The consequence is the implementation of a system which allows Quality Assurance of Universities in terms of their reputation (through a Quality Assessment System which includes peer evaluation), in terms of their resources (evaluation of students and teachers, equipment, spaces and management systems), and in terms of results (teachers' publications, participation in R&D projects, performance of students and graduates, satisfaction of employers). Portuguese law stipulates that universities are autonomous in what concerns their own management; however, schools found it difficult to adopt management models that could fulfill their needs and that could also control and regulate their autonomy. This recent law, which stipulates Higher Education Institutions' autonomy, covers academic, pedagogical and administrative aspects, and refers specifically to the systematic Assessment of scientific areas. As a consequence, each school must evaluate its own undergraduate and graduate Programs, using Assessment procedures as a tool to control and to improve the Quality of teaching and learning, as well as the procedures associated with them. Due to this situation, in 1994 public and private Institutions of Higher Education in Portugal started to face the legal requirement of carrying out, on a regular basis, Quality Assessment of their educational processes. The main objectives of this law are:
- To stimulate and to improve the Quality of all university activities;
- To inform and to show to society how the university is organized and how the inputs and outputs of its educational system are processed;
- To promote dialogue among different schools;
- To contribute to the reorganization of the national network of Higher Education Institutions.
As a consequence of Assessment processes, the Government can implement positive or negative measures, namely by reinforcing or cutting financial support for the target Institutions. However, it is possible to identify two different perspectives of Quality Assessment: the Government perspective (quantitative), which wants to define indicators, to measure, to rank, to compare, to account, to control and to inform; and, on the other side, the university perspective (qualitative), which is concerned about the formative nature of the process and aims to improve, to regulate and to identify its own capacities and limitations. Moreover, there are different perspectives on Quality as far as Higher Education Institutions are concerned, according to what can be referred to as "different clients", namely:
- The Government wants universities to accept as many students as possible, wants them to finish the Program and to get a degree of international level as soon as possible, and with reduced costs;
- The Employers are concerned about the standard of performance of the graduates;
- The Universities are concerned about good academic training based on good knowledge transfer and a good learning environment, and about the relation between teaching and research;
- The Students want the Program to give enough options and enough time for personal development.
A written midterm and final exam that evaluates only a student's ability to memorize and recall information is inadequate in today's educational environment. Active learning curriculum objectives focus on the acquisition of knowledge and skills that will help students in their lives outside the classroom. Students need knowledge and skills that will help them function at a higher level than the rote learning once expected at college. Today's students need critical thinking, problem solving, communication, and human relations skills. Ongoing classroom assessment provides continuous monitoring of student learning. Faculty receive ongoing feedback about their effectiveness, and students receive a measurement of their progress. Assessment may or may not require assigning A-B-C grades. Certainly quizzes and

formal tests can be part of the ongoing assessment process, but other methods of assessment should also be used to keep the class interesting and to provide a more accurate measurement of student learning.
Although the assessment strategies used depend on the course, most courses lend themselves to a variety of methods. A speech class lends itself to oral measures, such as oral reports and speeches. The same speech course could also use written measures by having students write reports on professional speakers. Students could also keep a journal of self-evaluation for the instructor to evaluate, or students could participate in cooperative learning groups by giving group speeches for assessment of both performance and human relations skills. Additionally, a speech course lends itself to classroom discussions where students are invited to speak about the course material while the instructor assesses which students are grasping the concepts and reaching the goals and which students need additional attention.
Regardless of the assessment strategies used, all assessment must focus on improving students' learning, with a secondary focus on improving teaching methods. Since assessment requires students' active participation in the process, it is to the teacher's advantage to get students to buy into the assessment strategy. If you continually show your interest in students and your investment in their learning, they will be more motivated to participate in assessment methods. As students become more accustomed to ongoing assessment, they will begin to see that it reinforces their learning and adds to their self-assessment skills.
Assessment strategies must be related to the course material and relevant to students' lives. Provide assessment strategies that relate to students' work, such as product analysis or portfolios. For computer courses, have students use simulated activities; have them keep a log of performance ratings or references, or role-play job interviews, mock trials, or historical moments.
Assessment strategies may influence the students' final grade; however, the assessments themselves need not be graded. Assessment is for the purpose of improving students' learning rather than for providing evidence for grading students. Assessment strategies call for more in-depth evaluation than an A-B-C grade allows, and they lend themselves to either written evaluations or one-to-one meetings with students.
Assessment strategies-whether of the individual, the course, or the entire program-give faculty an impressive tool to measure learning. With assessment, educators can find those students who need an extra hand, fine-tune their own teaching methods, or redesign whole programs.

VALIDITY AND RELIABILITY OF ASSESSMENT
Assessment makes teaching into teaching. Mere presentation, without assessment of what the learners have made of what you have offered them, is not teaching. So assessment is not a discrete process but integral to every stage of teaching, from minute to minute as much as from module to module. Informal assessment (or evaluation) is going on all the time: every time a student answers a question, or asks one, or starts looking out of the window, or cracks a joke, he is providing you with feedback about whether learning is taking place. It is more an evaluation of the teaching session than of his learning, but the two are inextricable. Assessment "reaches back" into the rest of teaching: in particular, poorly designed formal assessment regimes can severely hinder student learning and distort the process and subject matter. All assessment is ultimately subjective: there is no such thing as an "objective test". Even when there is a high degree of standardization, the judgment of what things are tested and what constitutes a criterion of satisfactory performance is in the hands of the assessor. However, we can still make every effort to ensure that assessment is valid, reliable and fair.

VALIDITY
Validity is an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment. A valid form of assessment is one which measures what it is supposed to measure.
- It does not assess memory when it is supposed to be assessing problem-solving (and vice versa).
- It does not grade someone on the quality of their writing when writing skills are not relevant to the topic being assessed, but it does when they are.
- It does seek to cover as much of the assessable material as practicable, not relying on inference from a small and arbitrary sample (and here it spills over into reliability).

There are three ways in which validity can be measured. In order to have confidence that a test is valid (and therefore that the inferences we make based on the test scores are valid), all three kinds of validity evidence should be considered.

Content validity: the extent to which the content of the test matches the instructional objectives. Example: a semester or quarter exam that only includes content covered during the last six weeks is not a valid measure of the course's overall objectives; it has very low content validity.

Criterion validity: the extent to which scores on the test are in agreement with (concurrent validity) or predict (predictive validity) an external criterion. Example: if the end-of-year math tests in 4th grade correlate highly with the statewide math tests, they have high concurrent validity.

Construct validity: the extent to which an assessment corresponds to other variables, as predicted by some rationale or theory. Example: if you can correctly hypothesize that ESOL students will perform differently on a reading test than English-speaking students (because of theory), the assessment may have construct validity.
So, does all this talk about validity and reliability mean you need to conduct statistical analyses on your classroom quizzes? No, it does not. (Although you may, on occasion, want to ask one of your peers to verify the content validity of your major assessments.) However, you should be aware of the basic tenets of validity and reliability as you construct your classroom assessments, and you should be able to help parents interpret scores on standardized exams.

RELIABILITY
A reliable assessment will produce the same results on re-test, and will produce similar results with a similar cohort of students, so it is consistent in its methods and criteria. Another measure of reliability is the internal consistency of the items. For example, if you create a quiz to measure students' ability to solve quadratic equations, you should be able to assume that if a student gets an item correct, he or she will also get other, similar items correct. The following table outlines three common reliability measures.

Stability or test-retest: give the same assessment twice, separated by days, weeks, or months. Reliability is stated as the correlation between scores at Time 1 and Time 2.

Alternate form: create two forms of the same test (vary the items slightly). Reliability is stated as the correlation between scores on Test 1 and Test 2.

Internal consistency (alpha): compare one half of the test to the other half, or use methods such as Kuder-Richardson Formula 20 (KR20) or Cronbach's Alpha.
The values for reliability coefficients range from 0 to 1.0: a coefficient of 0 means no reliability, and 1.0 means perfect reliability. Since all tests have some error, reliability coefficients never reach 1.0. Generally, a standardized test with a reliability above .80 is said to have very good reliability; below .50, it would not be considered a very reliable test.

DISCUSSION AND CONCLUSION
Assessment in higher education is very important for checking the validity and completeness of courses and facilities in institutes. Through the Self Assessment Report, every year or every two years, the institute is assessed by internal and external assessment teams. In all higher education institutions this is very important: after every term, assessment should be conducted by internal and external teams to check the facilities, the teaching and administrative services, and the transfer of knowledge from teachers to students. In conclusion, we would like to point out some of the underlying principles of the Assessment procedures discussed here:
- Quality Control is the responsibility of the Higher Education Institutions;
- The Quality Assessment process results from a contract between political and academic Institutions;
- This contract defines the rights and obligations of Governments and universities;
- Political Institutions legislate over the Quality Assessment process for all Higher Education Institutions and take care of its harmony and credibility, promoting the development and Quality improvement of university activities;
- External evaluation is carried out by experts, including foreign experts on the Visiting Committee;
- Quality Control depends on the existence of Quality Assessment Systems, which should include these features: self-evaluation should be compulsory; teachers and students should be heard at representative levels; the impact of university activities should be analysed; and the transparency of the process should be respected, as well as the public announcement of the results.

The Assessment System follows the general recommendations for higher education in order to establish some basic principles that allow comparative analysis among different Higher Education Institutions.