Educational Assessment in Practice

Educational assessment in practice is the foundation for evaluating students’ learning progress, skills, and activities. Assessment tools and procedures guide educators through a strategic process of gathering, analyzing, and interpreting data. Beyond evaluating student achievement, these evaluations are critical in guiding instructional practices and shaping educational policies (Brown, 2022). The key objectives of educational evaluation are to enhance the overall learning process, encourage ongoing student development, and ensure that teaching strategies remain effective and adaptable to students’ changing requirements. By concentrating on these goals, educational evaluation becomes a dynamic tool that promotes an atmosphere favourable to personal growth and academic success (Poirier and Wilhelm, 2019).

The results of educational assessment significantly affect both educational institutions and individual students. By using the information gained from evaluating students’ learning progress, instructors can customize their teaching strategies to address individual learning gaps and create an inclusive learning environment. These customized methods enhance learning outcomes, empower students, and encourage them to take initiative in their education (Monteiro, Mata and Santos, 2021). Furthermore, evaluation results support the ongoing development of educational programs by ensuring that those programs meet high-quality criteria and by encouraging openness in the educational process. These findings pave the way for students’ holistic growth by providing the skills, knowledge, and vision they need to succeed in their future professional and educational pursuits (Veugen, Gulikers and den Brok, 2021).

Educational Assessment Artefacts

Educational assessment artefacts are a broad category of materials or evidence used in educational contexts to evaluate and measure student learning, development, and success. These artefacts include examination items, student work samples, news articles or editorials, portfolios, and assessment rubrics with defined evaluation criteria (Levi and Lourie, 2019). Each artefact provides evidence of students’ knowledge, abilities, and competencies in various subject areas. These resources ground an understanding of assessment methods and help instructors make informed decisions about curriculum modifications, individualized student support, and pedagogical approaches, leading to improved learning outcomes (DeLuca, Chin and Klinger, 2019). The five selected key assessment artefacts are the following:

Examination Items. These are standardized test questions or items used to measure students’ knowledge, understanding, and application of concepts within a specific subject area. Examination items are crucial in assessing learning outcomes, guiding instructional strategies, and offering a standardized measure to consistently compare and evaluate student performance (Guo et al., 2020).

Student Work Samples. Student work samples are tangible representations of students’ learning, such as assignments, projects, or essays, that demonstrate applied knowledge and skills. These samples give educators invaluable insight into the depth and quality of student learning. Through these artefacts, teachers gain a holistic view of students’ abilities, facilitating tailored instructional approaches that address specific learning needs (Chen and Yang, 2019).

News Articles or Editorials. Publicly available texts used to assess students’ reading comprehension, critical thinking, and understanding of current events or diverse perspectives. By incorporating articles or editorials into assessments, educators can gauge students’ abilities to extract essential information, evaluate the credibility of sources, and construct well-reasoned responses (Delavan, Freire and Menken, 2021).

Portfolios. Collections of student work over time, demonstrating growth, achievements, and a wide range of skills. They offer a holistic view of progress, enabling reflection, goal-setting, and the display of diverse abilities. Portfolios aid in personalized assessment, revealing learning processes and critical thinking skills, and providing deeper insight into students’ development (Pool et al., 2020).

Assessment Rubrics. Assessment rubrics are structured guidelines or scoring criteria employed to systematically evaluate student work, ensuring uniformity and clarity in assessment processes. Rubrics offer transparency regarding the expectations and standards for success in a particular assessment by outlining specific criteria and performance levels. They aid educators in maintaining fairness and objectivity by providing a clear roadmap for evaluating various aspects of student work (Panadero and Jonsson, 2020).
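
To make this structure concrete, the following minimal sketch models a rubric as a simple data structure whose scoring logic is explicit and applied identically to every submission. It is an illustration in Python; the criteria, level descriptors, and weights are invented for the example.

```python
# A minimal sketch of a rubric as a data structure. The criteria,
# level descriptors, and weights below are invented for illustration.
RUBRIC = {
    "argument": {
        "weight": 0.5,
        "levels": {
            3: "Clear thesis sustained with well-chosen evidence",
            2: "Thesis present but unevenly supported",
            1: "No discernible thesis",
        },
    },
    "use_of_sources": {
        "weight": 0.3,
        "levels": {
            3: "Credible sources integrated and cited correctly",
            2: "Sources cited but weakly integrated",
            1: "Sources missing or uncredited",
        },
    },
    "clarity": {
        "weight": 0.2,
        "levels": {
            3: "Precise, well-organized prose",
            2: "Generally clear with occasional lapses",
            1: "Frequently unclear",
        },
    },
}

def score(awarded):
    """Weighted total from the per-criterion levels a marker awarded."""
    return sum(RUBRIC[c]["weight"] * level for c, level in awarded.items())

# One marker's judgment of one essay against the rubric.
print(f"{score({'argument': 3, 'use_of_sources': 2, 'clarity': 3}):.2f}")  # 2.70
```

Because every marker applies the same weights and level definitions, the expectations stay visible to students and the scoring stays uniform across assessors.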

Analysis of Artefacts in Terms of Assessment Issues

Validity. Numerous educational assessment artefacts, including examination items, student work samples, news articles or editorials, portfolios, and assessment rubrics, raise validity issues. Validity problems may occur when exam questions do not closely match the intended learning objectives or accidentally assess unintended skills. Vague or ambiguous test questions can cause misunderstanding, undermining the evaluation’s validity. Similarly, student work samples may not be valid if they do not fairly depict the student’s knowledge or abilities; this may happen if the samples fail to accurately demonstrate the student’s understanding or effort (Gitomer et al., 2019).

Validity problems also arise when choosing news articles or editorials for evaluation, since they might fail to target the desired competencies or present biased opinions that compromise an impartial assessment. Validity issues with portfolios may surface if the selected pieces do not meet the intended assessment criteria (Pool et al., 2020). The validity of assessment rubrics must be ensured by precisely defining the evaluation criteria; any vagueness or misalignment between the learning objectives and the rubric criteria can put the assessment’s validity at risk. To guarantee validity in these artefacts, learning objectives must be carefully aligned, evaluation criteria must be clear, and students’ real abilities and knowledge must be authentically assessed. Educators must continually assess and refine these artefacts to guarantee they effectively measure the intended competencies and reflect students’ ability levels (Colquitt et al., 2019).

Reliability. Addressing reliability issues in assessment artefacts is essential to measuring student performance and understanding consistently across circumstances. Examination questions or tasks raise reliability concerns when they do not measure the same skills or knowledge consistently; the dependability of tests may be compromised by inconsistent grading, unclear question phrasing, or variable difficulty. Student work samples may face reliability issues due to subjective grading standards or varying assessor interpretations (Kurbanoğlu and Olcayturk, 2023). Without explicit instructions or uniform evaluation standards, work samples may not be graded consistently, which could result in disparate assessments of comparable student performances. The complexity or content of articles or editorials selected for assessment purposes may also vary, undermining reliability (Kurbanoğlu and Olcayturk, 2023). Although portfolios offer a thorough picture of a student’s development, they may not be dependable if inconsistent inclusion requirements affect the consistency of assessment results. Finally, when criteria are unclear or applied inconsistently by different assessors, assessment rubrics become unreliable and lead to inconsistent judgments of students (Limpo et al., 2022).

Addressing these reliability concerns requires standardized procedures for creating test items, clear grading guidelines for student work samples, consistent inclusion standards for portfolios, and well-defined rubrics (Long and Wang, 2022). Enhancing the reliability of assessment findings demands a critical commitment to consistency in assessment design and grading methods across different artefacts. To further minimize reliability difficulties and ensure consistent evaluation of student learning and performance, assessors should regularly review the assessment materials and criteria (Long and Wang, 2022).
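
As a concrete complement to these remedies, grading consistency between assessors can be quantified. The sketch below is a minimal Python illustration, with invented marker scores, of Cohen’s kappa: one common inter-rater agreement statistic that corrects the raw agreement between two markers for the agreement expected by chance. Other statistics (Cronbach’s alpha for internal consistency, for instance) address different facets of reliability.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    n = len(rater_a)
    # Observed agreement: share of samples where the two markers match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each marker's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two markers scoring the same ten essays on a 1-4 rubric scale
# (scores invented for illustration).
marker_1 = [3, 4, 2, 3, 1, 4, 3, 2, 3, 4]
marker_2 = [3, 4, 2, 2, 1, 4, 3, 3, 3, 4]
print(f"kappa = {cohen_kappa(marker_1, marker_2):.2f}")
# kappa = 0.71; values near 1 indicate consistent grading,
# values near 0 indicate agreement no better than chance.
```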

Fairness. In educational evaluation, fairness matters across artefacts, including examination items, student work samples, news articles or editorials, portfolios, and assessment rubrics. First, guaranteeing fairness in exam items requires designing questions that are impartial and understandable to all students, regardless of their experiences or backgrounds; this means avoiding linguistic, cultural, or socioeconomic biases that could disadvantage particular communities (Troitschanskaia et al., 2019). Similarly, fairness requires that assessments of student work samples are free of prejudices based on personal opinions, cultural preferences, or lenient grading practices. Teachers should assess student work from multiple perspectives without bias towards students’ backgrounds. Fairness in news articles and editorials means selecting materials that reflect a wide range of opinions and cultural backgrounds while avoiding content that is insensitive or biased toward specific groups (Rasooli et al., 2019). Furthermore, portfolio fairness necessitates an impartial assessment recognising the diverse range of experiences, cultures, and expressions depicted in the compiled works. To ensure fairness and equity in grading and evaluation procedures, assessment rubrics must be designed to evaluate performance objectively, without prejudice towards particular expressions or methods (Shraim, 2019). Achieving fairness across these artefacts requires addressing unintentional biases or disparities: teachers must critically examine assessment materials to reduce biases in language, cultural allusions, or stereotypes that could advantage particular groups (Wafudu, Kamin and Marcel, 2022). In addition, uniform and open evaluation procedures are necessary for assessment fairness, guaranteeing that tests are equal for every student, irrespective of their circumstances or background. Teachers must establish an impartial, inclusive environment that promotes fairness in assessing students’ learning outcomes (Troitschanskaia et al., 2019).
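
One concrete, if simplified, way to examine assessment materials for the disparities described above is to compare how an exam item behaves across student groups. The sketch below is a naive screen with invented response data and an arbitrary threshold; genuine differential item functioning analysis is considerably more involved, and a flagged item warrants human review rather than automatic removal.

```python
# Naive screen for one fairness signal: whether an exam item's
# correct-answer rate differs sharply between two student groups.
# Group labels and responses are invented for illustration.
def correct_rate(responses):
    """Share of students in a group who answered the item correctly."""
    return sum(responses) / len(responses)

# 1 = correct, 0 = incorrect, for one item, split by a background variable.
group_a = [1, 1, 0, 1, 1, 1, 0, 1]   # correct rate 0.75
group_b = [0, 1, 0, 0, 1, 0, 0, 1]   # correct rate 0.375

gap = correct_rate(group_a) - correct_rate(group_b)
if abs(gap) > 0.2:  # threshold is arbitrary; it only flags items for review
    print(f"Review item: correct-rate gap of {gap:.2f} between groups")
```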

Authenticity. Authenticity in educational assessment is crucial to ensuring that artefacts accurately capture students’ talents, knowledge, and skills. Examination items should align with learning objectives and reflect the genuine complexity of the subject matter without ambiguous or misleading content. Authenticity in student work samples rests on showcasing students’ genuine efforts and comprehension of their educational experiences (Ajjawi et al., 2019). When news articles or editorials are used for evaluation, sources should be reliable, authentic publications that reflect a range of viewpoints and accurate data from real-world situations. Portfolios, as compilations of students’ work, should portray their growth, development, and reflections over time, providing an authentic snapshot of their learning journey. Similarly, assessment rubrics must authentically reflect the criteria by which student work is evaluated, ensuring transparency, consistency, and accuracy in assessment procedures without bias or artificial constraints (Sokhanvar, Salehi and Sokhanvar, 2021).

Maintaining authenticity across these artefacts requires rigour in developing, evaluating, and selecting material. Exam questions, student work samples, news articles or editorials, portfolios, and assessment rubrics should all be carefully aligned with the learning objectives and free of shallow or misleading content (Colthorpe et al., 2020). To provide a true reflection of students’ learning experiences and accomplishments, artefacts should accurately represent students’ efforts and talents. Authenticity promotes trust in the assessment process by ensuring that artefacts are used accurately, enabling educators to make well-informed decisions about instructional needs (Sokhanvar, Salehi and Sokhanvar, 2021).

Transparency. Transparency in educational assessment artefacts is crucial for ensuring clarity, openness, and comprehensibility in the assessment process. For examination items, transparency revolves around articulating the assessment’s purpose, format, and evaluation criteria: students and educators receive explicit information about the test structure, the specific skills or knowledge being assessed, and the grading methodology employed (Lucander and Christersson, 2020). Similarly, transparency in student work samples requires clear guidelines and expectations regarding the assessment criteria, giving students a thorough understanding of how their work will be judged and guaranteeing a clear measurement of their performance. For assessment-related news articles or editorials, transparency extends to the selection process: teachers should explain why they selected specific texts for analysis, which helps students understand how the articles align with the learning objectives and relate to their assessments (Hardwicke et al., 2020).

Transparency in portfolios means specific guidelines on gathering and arranging work samples to demonstrate developmental stages, together with a clear explanation of the evaluation criteria, creating a mutual understanding between teachers and students (Hardwicke et al., 2020). Similarly, assessment rubrics demand transparency in delineating grading criteria and performance expectations. Clarity in rubrics helps students comprehend the specific benchmarks against which their work will be assessed and allows for self-reflection. Overall, transparency across these artefacts ensures a clear understanding of assessment expectations, promoting fairness and empowering students to meet academic standards (Delavan, Freire and Menken, 2021).

Human Judgment and Bias. Human judgment and bias can significantly affect the assessment of the various artefacts used in educational evaluation. For examination items, human judgment may influence the selection or formulation of questions, potentially introducing biases based on personal perspectives or preferences; biased questions can unintentionally favour specific cultural backgrounds or areas of knowledge, undermining the fairness of the assessment (Feinstein and Waddington, 2020). Similarly, human judgment may lead to subjective interpretations when evaluating student work samples, resulting in grading influenced by personal biases or preconceptions about particular styles or approaches. This subjectivity can affect the consistency and fairness of grading, potentially disadvantaging or favouring certain students based on educators’ biases rather than the actual merit of the work (Wafudu, Kamin and Marcel, 2022).

When evaluating news stories or editorials, human judgment is essential in choosing texts for review, and biases may creep into the selection of articles, inadvertently supporting specific points of view. Evaluators’ subjective interpretations of these texts may also influence the grading of students’ responses, affecting the impartiality and fairness of assessment results (Feinstein and Waddington, 2020). In portfolios and grading rubrics, human judgment may likewise introduce unintended biases based on individual interpretations of criteria or expectations. Subjective judgments by assessors may undermine the assessment’s consistency, resulting in grading standards that do not match the specified evaluation criteria and jeopardizing assessment accuracy and fairness (Carpenter, Witherby and Tauber, 2020).

Narrative Relating the Artefacts to One Another

Examination items, student work samples, news articles or editorials, portfolios, and assessment rubrics are the different artefacts used in educational assessment, and each has an immense effect on the assessment process. In addition to emphasizing a particular aspect of student learning, each artefact raises issues of validity, human judgment, bias, and fairness. Examination items, the foundation of formal evaluation, must closely correspond to learning objectives to ensure validity. In assessing student work samples, educators encounter validity challenges due to the inherent subjectivity of grading (Carpenter, Witherby and Tauber, 2020).

News articles and editorials used in reading comprehension and critical thinking tests pose the challenge of selecting unbiased content; to expose students to a variety of viewpoints and minimize subjective interpretations, educators must negotiate potential biases in the assessment. Portfolios, which attempt to capture a comprehensive picture of students’ progress, face the problem of authentic representation, so educators must evaluate students’ performance carefully and set aside personal interpretations to represent a range of viewpoints without bias (Wafudu, Kamin and Marcel, 2022). Finally, human judgment is an issue in the assessment rubrics used to mark student academic performance; to provide accurate assessment, educators must apply fairness to reduce evaluation biases. These artefacts interconnect through their shared issues of validity, human judgment, bias, and fairness, reinforcing educators’ need to navigate these challenges collectively. The alignment of examination items with learning objectives directly influences the authenticity of student work samples and portfolios (Baig, Shuib and Yadegaridehkordi, 2020), while the unbiased selection of news articles shapes the diverse content in portfolios and influences the criteria in assessment rubrics. Human judgment pervades each artefact, affecting its validity and fairness and highlighting the interdependence and complexity inherent in educational assessment. As educators navigate these artefacts, their interconnectedness underscores the need for a comprehensive approach that addresses these issues collectively, ensuring fair, valid, and meaningful assessment practices (Brown, 2022).

Conclusion

The essence of educational assessment centres on an array of artefacts serving as fundamental tools to measure student learning and advancement. Analyzing these artefacts through the lenses of validity, reliability, fairness, authenticity, transparency, and human judgment and bias showcases the intricate complexities of the assessment process. Validity underscores the necessity for assessments to accurately measure the intended learning objectives, while reliability seeks consistency and dependability in assessment outcomes over time. Fairness is paramount, ensuring equitable evaluation without favouring any particular group. Authenticity demands that assessments genuinely reflect students’ comprehension and skills, whereas transparency necessitates clarity in assessment processes. Finally, the pervasive influence of human judgment and bias can compromise assessment objectivity and fairness, necessitating careful mitigation.

References

Ajjawi, R., Tai, J., Nghia, T.L., Boud, D., Johnson, L. and Patrick, C.-J. (2019). Aligning assessment with the needs of work-integrated learning: The challenges of authentic assessment in a complex context. Assessment & Evaluation in Higher Education, 45(2), pp.1–13. https://doi.org/10.1080/02602938.2019.1639613.  

Baig, M.I., Shuib, L. and Yadegaridehkordi, E. (2020). Big data in education: a state of the art, limitations, and future research directions. International Journal of Educational Technology in Higher Education, 17(1). https://doi.org/10.1186/s41239-020-00223-0

Brown, G.T.L. (2022). The past, present and future of educational assessment: A transdisciplinary perspective. Frontiers in Education, 7. https://doi.org/10.3389/feduc.2022.1060633

Carpenter, S.K., Witherby, A.E. and Tauber, S.K. (2020). On students’ (Mis) judgments of learning and teaching effectiveness. Journal of Applied Research in Memory and Cognition, 9(2), pp.137–151. https://doi.org/10.1016/j.jarmac.2019.12.009 

Chen, C.H. and Yang, Y.C. (2019). Revisiting the effects of project-based learning on students’ academic achievement: A meta-analysis investigating moderators. Educational Research Review, 26(26), pp.71–81. https://doi.org/10.1016/j.edurev.2018.11.001.  

Colquitt, J.A., Sabey, T.B., Rodell, J.B. and Hill, E.T. (2019). Content validation guidelines: Evaluation criteria for definitional correspondence and definitional distinctiveness. Journal of Applied Psychology, 104(10), pp.1243–1265. https://doi.org/10.1037/apl0000406.  

Colthorpe, K., Gray, H., Ainscough, L. and Ernst, H. (2020). Drivers for authenticity: Student approaches and responses to an authentic assessment task. Assessment & Evaluation in Higher Education, pp.1–13. https://doi.org/10.1080/02602938.2020.1845298

Delavan, G.M., Freire, J.A. and Menken, K. (2021). Editorial introduction: A historical overview of the expanding critique(s) of the gentrification of dual language bilingual education. Language Policy, 20(3), pp.299–321. https://doi.org/10.1007/s10993-021-09597-x 

DeLuca, C., Chin, A.C. and Klinger, D.A. (2019). Toward a teacher professional learning continuum in assessment for learning. Educational Assessment, 24(4), pp.267–285. https://doi.org/10.1080/10627197.2019.1670056

Feinstein, N.W. and Waddington, D.I. (2020). Individual truth judgments or purposeful, collective sense making? Rethinking science education’s response to the post-truth era. Educational Psychologist, 55(3), pp.155–166. https://doi.org/10.1080/00461520.2020.1780130

Gitomer, D.H., Martínez, J.F., Battey, D. and Hyland, N.E. (2019). Assessing the assessment: Evidence of reliability and validity in the EDTPA. American Educational Research Journal, 58(1). https://doi.org/10.3102/0002831219890608

Guo, P., Saab, N., Post, L.S. and Admiraal, W. (2020). A review of project-based learning in higher education: Student outcomes and measures. International Journal of Educational Research, 102(1). https://doi.org/10.1016/j.ijer.2020.101586

Hardwicke, T.E., Wallach, J.D., Kidwell, M.C., Bendixen, T., Crüwell, S. and Ioannidis, J.P.A. (2020). An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). Royal Society Open Science, 7(2). https://doi.org/10.1098/rsos.190806.  

Kurbanoğlu, N.İ. and Olcayturk, M. (2023). Investigation of the exam question types attitude scale for secondary school students: Development, validity, and reliability. Sakarya University Journal of Education, 13(2), pp.191–206. https://doi.org/10.19126/suje.1187470

Levi, T. and Lourie, L. (2019). Assessment literacy or language assessment literacy: Learning from the teachers. Language Assessment Quarterly, pp.1–15. https://doi.org/10.1080/15434303.2019.1692347

Limpo, T., Rasteiro, I., Aguiar, S. and Magalhães, S. (2022). Examining the factorial structure, reliability, and predictive validity of the Portuguese version of the child and adolescent mindfulness measure (CAMM). Mindfulness, pp.2879–2890. https://doi.org/10.1007/s12671-022-02003-5

Long, H. and Wang, J. (2022). Dissecting reliability and validity evidence of subjective creativity assessment: A literature review. Educational Psychology Review, pp.1399–1443. https://doi.org/10.1007/s10648-022-09679-0

Lucander, H. and Christersson, C. (2020). Engagement for quality development in higher education: A process for quality assurance of assessment. Quality in Higher Education, 26(2), pp.135–155. https://doi.org/10.1080/13538322.2020.1761008

Monteiro, V., Mata, L. and Santos, N.N. (2021). Assessment conceptions and practices: Perspectives of primary school teachers and students. Frontiers in Education, 6. https://doi.org/10.3389/feduc.2021.631185  

Panadero, E. and Jonsson, A. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30(1). https://doi.org/10.1016/j.edurev.2020.100329

Poirier, T.I. and Wilhelm, M. (2019). Scholarly and best practices in assessment. American Journal of Pharmaceutical Education, 82(3), pp.67–69. https://doi.org/10.5688/ajpe6769

Pool, A. Oudkerk, Jaarsma, A.D.C., Driessen, E.W. and Govaerts, M.J.B. (2020). Student perspectives on competency-based portfolios: Does a portfolio reflect their competence development? Perspectives on Medical Education, 9(3), pp.166–172. https://doi.org/10.1007/s40037-020-00571-7

Rasooli, A., DeLuca, C., Rasegh, A. and Fathi, S. (2019). Students’ critical incidents of fairness in classroom assessment: An empirical study. Social Psychology of Education, 22(3), pp.701–722. https://doi.org/10.1007/s11218-019-09491-9

Shraim, K. (2019). Online examination practices in higher education institutions: Learners’ Perspectives. Turkish Online Journal of Distance Education, 20(4), pp.185–196. https://doi.org/10.17718/tojde.640588

Sokhanvar, Z., Salehi, K. and Sokhanvar, F. (2021). Advantages of authentic assessment for improving the learning experience and employability skills of higher education students: A systematic literature review. Studies in Educational Evaluation, 70(11). https://doi.org/10.1016/j.stueduc.2021.101030

Troitschanskaia, O., Schlax, J., Jitomirski, J., Happ, R., Thees, C., Brückner, S. and Pant, H.A. (2019). Ethics and fairness in assessing learning outcomes in higher education. Higher Education Policy, 32(4), pp.537–556. https://doi.org/10.1057/s41307-019-00149-x

Veugen, M.J., Gulikers, J.T.M. and den Brok, P. (2021). We agree on what we see: Teacher and student perceptions of formative assessment practice. Studies in Educational Evaluation, 70(27). https://doi.org/10.1016/j.stueduc.2021.101027

Wafudu, S.J., Kamin, Y.B. and Marcel, D. (2022). Validity and reliability of a questionnaire developed to explore quality assurance components for teaching and learning in vocational and technical education. Humanities and Social Sciences Communications, [online] 9(1), pp.1–10. https://doi.org/10.1057/s41599-022-01306-1
