The Impact of Using Rubrics as Assessment Tools on Pupils’ Academic Performance in Senior Secondary Mathematics: A case of Malela Secondary School, Copperbelt Province, Zambia

This study was conducted to establish whether using rubrics as an assessment tool has an impact on pupils' academic performance in Mathematics at senior secondary school level. The poor performance of Grade 12 pupils in Mathematics has been a matter of concern, especially at Malela Secondary School. In this study, a rubric served as an assessment tool for gathering information about pupils' achievement levels in demonstrating both knowledge and skills. The sample comprised two (2) Grade 12 classes, where one class was the experimental group (N=55) and the other was the control group (N=55), together with six (6) teachers of Mathematics. The two classes were selected from the target population of five (5) Grade 12 classes using simple random sampling, and the six teachers were selected purposively. The study was guided by three (3) research questions and two (2) hypotheses. A mixed-methods approach was used within a quasi-experimental design. Three topics (probability, geometrical transformations and linear programming) were taught to both the experimental and control groups using problem-based learning strategies. The experimental group was assessed using rubrics while the control group was assessed using conventional forms of assessment. Both groups were subjected to a pre-test, end-of-topic tests and a post-test. Data was also collected using teacher questionnaires and classroom observations. Quantitative data was analyzed using SPSS version 20, where an independent-samples t-test was conducted at the alpha (α) = 0.05 significance level, and qualitative data was analyzed by categorizing, describing and explaining. The study revealed that using rubrics for assessment had a positive impact on pupils' academic performance, as the experimental group outperformed the control group.
The study also revealed that Mathematics teachers had a limited range of ways of assessing pupils in class. Hence the recommendation that alternative assessment tools (rubrics) be made mandatory for teachers of Mathematics at senior secondary school level.


Introduction
Assessment plays an important role in examining students' understanding of the concepts or skills being taught, and it should include a variety of feedback as well as discussions to measure and evaluate the quality of the learning process that takes place (Stiggins, 2002). What matters most is not so much the form of the assessment but how the information gathered is used to improve teaching and learning, in that it should pinpoint pupils' strengths and weaknesses. In mathematics education, assessment provides evidence of pupils' learning, playing a critical role in monitoring competency levels as well as focusing on individual learning. Therefore, the assessment process should be integrated with classroom activities so that it can improve the effectiveness of the teaching and learning process, and this calls for teachers to make their assessment practices more transparent to pupils through the use of rubrics (Jonsson, 2014). A rubric is an assessment tool that gives the criteria for a task and articulates gradations of quality for each criterion, from excellent to poor (Goodrich, 1997). A rubric is one of the tools which define what is expected in a learning situation and make the assessment more meaningful, assisting in clarifying expectations as well as providing better feedback (Montgomery, 2000). It lists the things that pupils must do in order to receive a certain grade (Popham, 1997). Harmon (2001) also pointed out that if teachers expect the best from pupils, pupils need to know what the teachers are looking for in any particular task. It could also be true that human minds cannot work in an identical way: pupils might emphasize their own focal aspects during a particular exercise or task, while the teacher might have something else in mind to focus on.
Therefore, rubrics, with their structured direction, can perhaps remove these discrepancies of thought and intrinsically motivate pupils to use them for learning, especially when pupils are trained and guided in their use (Egodawatte, 2010). The essential features of an assessment rubric are the evaluation criteria, the task description, the performance descriptions and the levels of performance. The task description defines a specific objective to be accomplished; the evaluation criteria describe the extent of proficiency required for a given task; the levels of performance describe how well or poorly any given task has been performed, and can be expressed in qualitative or quantitative form; and the performance description contains at least a description of the highest level of performance in each dimension (Mertler, 2001). Rubrics are categorized into two main groups, namely holistic and analytic (Reddy & Andrade, 2010). Holistic rubrics require the teacher to group several different assessment criteria and assess them as a whole, without judging the constituents separately (McGatha & Darcy, 2010). Analytic rubrics, on the other hand, consider criteria one at a time and evaluate them separately and comprehensively. Brookhart (2013) stated that analytic rubrics are useful in the classroom since the scoring criteria can help teachers and pupils identify pupils' strengths and weaknesses, while holistic rubrics are usually used for large-scale assessment because they are thought to be easy to use; both types of rubrics are used to maintain consistency in marking across students' work as well as between different markers. Song (2006) added that providing feedback in a classroom through rubrics has been found to have a powerful impact on pupils' learning and is seen as an inherent catalyst for self-regulated activities, hence the reason to consider a rubric as an assessment tool.
Rubrics provide timely and constructive feedback that helps learners understand how to conceptualize high levels of achievement in performance (Moskal, 2000). In this regard, if rubrics are to be used in any teaching and learning process, the learners need to be exposed to them before any given task so as to allow the teacher(s) to explain their expectations, standards and evaluation criteria (Andrade, 2001). Where and when a rubric is used does not depend on the grade level or subject, but rather on the purpose of the assessment (Brookhart, 2003).

Statement of the problem
The use of an appropriate assessment tool can help teachers identify the problems faced by pupils while obtaining feedback on teaching activities (Stiggins, 2002). However, the teacher is the key assessment grader who determines the score of a pupil's work. The grading process depends on the respective Mathematics teacher, who sets a marking scheme for assessing pupils' learning. Some teachers express the belief that Mathematics is about getting the right or wrong answer and do not take account of the mathematical process (methods). According to Watson (2006), appropriate assessment in Mathematics involves paying attention to the method of solving any mathematical problem before looking at the final answer, be it correct or wrong. Evidence from the Examinations Council of Zambia (ECZ) has revealed that pupils' low performance in Mathematics at senior secondary school level has been of great concern for some time (ECZ, 2017), which led to a number of interventions, such as the introduction of Continuing Professional Development (CPD) and Subject Meeting at Resource Centre (SMARC) in schools. All these interventions by the Ministry of Education (MOE) were in a bid to improve the teaching and learning of Mathematics and other subjects (MOE, 1996). Despite the interventions, evidence from the ECZ revealed that low performance in Mathematics could be partly attributed to the way teachers mark exercises in class, concentrating only on the final answers without really looking at the thought process, resulting in unconstructive and uninformative feedback which cannot be used to improve pupils' future performance (ECZ, 2014). Therefore, this study was intended to investigate the impact of using an assessment rubric on pupils' performance in Mathematics at senior secondary school level.

Research objectives
The study was guided by the following objectives:
1. To assess whether a rubric is an effective tool in the teaching and learning of Mathematics at senior secondary school level.
2. To establish the classroom assessment tools employed by teachers of Mathematics.
3. To determine the impact of using rubrics as an assessment tool on academic performance in Mathematics.

Research Questions
The objectives of this study were realized by pursuing answers to the following questions.
1. Are rubrics effective assessment tools in the teaching and learning of Mathematics?
2. What types of classroom assessment tools are employed by Mathematics teachers?
3. What is the impact of using rubrics as an assessment tool on the academic performance of pupils in Mathematics (probability, geometric transformations and linear programming)?

Literature Review
2.1 A Rubric as an Assessment Tool
An assessment tool is a specialized tool for assessing learners' abilities, including the strengths and weaknesses that can serve as input into the teaching and learning process. According to Moskal (2003), an effective assessment tool is one which allows learners to track their progress over time and enables educators to report on the progress made by learners, whether or not they have achieved a particular learning outcome. Lovorn and Rezaei (2010) posited that an assessment rubric is regarded as an effective and sufficient tool because it possesses the trilogy of principles: validity, reliability and practicability. Livingstone (2012) also added that a rubric provides an objectivity that is not found in other assessment tools, as it ensures that teachers have a basis for their final assessment.
2.2 The Role of Rubrics in the Teaching and Learning Process
An assessment rubric orients teachers to their goals, as it is used to clarify the learning goals, design instruction that addresses those goals, communicate the goals to pupils, guide the teachers in giving feedback to pupils and judge the final product in terms of the degree to which the goals have been met (Goodrich, 2000). Additionally, assessment rubrics keep teachers fair and unbiased in their grading when they might otherwise be tempted to assign grades based on irrelevant things; one teacher commented that rubrics keep them honest. By providing a more objective evaluative format and decreasing the subjectivity of pupils' evaluation, rubrics also help improve performance, as they reduce uncertainty and ambiguity.

Using an Assessment Rubric in Mathematics Education.
Mathematics is one of the core subjects taught at all levels of education in Zambia, but learners seem to shy away from the subject for many reasons. In this regard, Okereke (2006) pointed out that lack of good assessment practices among teachers of Mathematics may also be a contributing factor to why learners shy away from the subject. In Mathematics education, any mathematical problem solving is a performance assessment that requires judging a learner's overall performance on mathematical concepts, procedures, processes and disposition towards Mathematics. Wiggins (1990) added that using an assessment rubric in Mathematics breaks with traditional assessment that is based only on the correctness of pupils' answers, and it also reduces subjectivity in scoring any mathematical problem. However, the focus of assessment in Mathematics should not only be the answers provided by the learners, but also how the work reflects the learning process skills used in solving any mathematical problem, as well as the learners' development of abilities and positive values and attitudes (Bond, 2003). Furthermore, the assessment practices needed in Mathematics are those that integrate learning activities, support pupils' construction of knowledge and reflect the diversity found in the curriculum and in learners themselves. Learners are also expected to justify their answers, be they written, pictorial, graphical or algebraic, through mathematical communication, and this is what brings about the use of an assessment rubric in Mathematics (Meier et al., 2006).

Experiences from Europe, Asia and America
A study conducted in Spain by Arrufat et al. (2014) on integrating a rubric as an assessment-for-learning tool in a secondary Mathematics classroom found that it enhanced learning, which translated into improved performance. In their study, Arrufat et al. (2014) reported that using rubrics in assessment helped the students to know the expectations of their teacher, made the students very active participants in the teaching and learning process, increased the ability of students to judge a quality performance and facilitated constructive and self-regulated learning, in line with the study by Hafner and Hafner (2003). On the other hand, Arrufat et al. (2014) reported that using a rubric for assessment was indeed time consuming. Another study, conducted by Lee et al. (2015) in Malaysia and entitled "A Marking Scheme Rubric: To Assess Students' Mathematical Knowledge for a Test on Vectors", revealed that there was fair judgment of students' problem-solving work, as the expectations were made clear to the instructor and the students, which raised awareness of the learning objectives and encouraged thoughtfulness about the quality of teaching and learning. Similar findings have been reported elsewhere: using rubrics in assessment increases pupils' awareness of learning goals, clarifies teachers' expectations and explains the criteria needed to meet a quality performance.
www.iiste.org ISSN 2224-5804 (Paper) ISSN 2225-0522 (Online) DOI: 10.7176/MTM Vol.9, No.7, 2019
Schafer et al. (2007), in their study on the effects of teacher knowledge of rubrics on students' achievement in four content areas (Algebra, Biology, English and Government), noted that among the four areas, the Algebra results produced a learning advantage which generalized across assessment formats, meaning that the pupils whose teachers underwent rubric training performed better than those whose teachers did not. This implied that when teachers clearly explain the expected achievement levels in their instruction, it leads to higher performance on tests. The importance of students' understanding of rubrics is stressed again in a study by Andrade, Du and Wang (2008), which reported positive effects of rubrics on students' performance. In Switzerland, Smit et al. (2007) also studied the effect of a rubric for mathematical reasoning on the teaching and learning of Mathematics and reported that rubrics had no effect on learners' mathematical reasoning, although goals were made clear. Relatedly, a study by Panadero et al. (2012) reported that pupils showed higher self-regulation competencies after using rubrics. An action research report by Weitzenkamp and Hoper (2008) entitled "Rubric Assessment of Mathematical Processes in Homework" articulated that there was an improvement in communication between the teacher and the pupils as they did homework presentations in class, and pupils became responsible for their own learning by providing justification for solutions through words and pictures, which translated into improved performance.

Experiences in Africa
In a study conducted by Chukwuyenum and Adeleye (2013) on the impact of peer assessment on performance in Mathematics among senior secondary school students, it was revealed that students who received training on peer assessment using rubrics did better in a Mathematics test than those who did not receive the training. The difference was attributed to the acquisition of peer-assessment knowledge infused in the teaching of Mathematics, as students were able to learn from each other while grading each other's work. In support of these findings, Wiggins (1998) commented that peer assessment is inseparable from any assessment that is aimed at improving learning. Nevertheless, most such studies were done abroad and, in most cases, at higher education level. Not much research has been done in Zambia on the impact of rubrics as assessment tools, even though rubrics are among the assessment tools the Ministry of Education (MOE) advocated in its curriculum implementation strategies through the Zambia Education Curriculum Framework (2013). Considering the above studies, it can be said that rubrics help pupils to focus on their work, produce work of higher quality, earn better grades and feel less anxious about their work.

Methodology
3.1 Target Population
This study was conducted at Malela Secondary School in Kitwe district of Copperbelt Province, Zambia. The target population was all five (5) Grade 12 classes taking Ordinary Level Mathematics, each class having fifty-five (55) pupils, together with all ten (10) teachers of Mathematics.

Sample Size and Sampling Procedures
The sample comprised 110 pupils from two (2) classes, selected from the five (5) classes using simple random sampling; one class was assigned to the experimental group and the other to the control group by a coin flip. The six (6) teachers of Mathematics were selected from the ten (10) teachers using purposive sampling.
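The selection and assignment steps described above can be sketched in code. This is only an illustration: the class labels and the random seed below are invented for the sketch, not the school's actual class names.

```python
import random

# Hypothetical illustration of the sampling procedure: pick 2 of the 5
# Grade 12 classes by simple random sampling, then assign experimental
# and control status by a coin flip. The labels are invented.
random.seed(7)  # arbitrary seed so the sketch is reproducible

classes = ["12A", "12B", "12C", "12D", "12E"]  # the five Grade 12 classes
chosen = random.sample(classes, 2)             # simple random sampling of 2 classes

# Coin flip: heads -> the first chosen class becomes the experimental group.
if random.choice(["heads", "tails"]) == "heads":
    experimental, control = chosen
else:
    control, experimental = chosen

print("experimental group:", experimental)
print("control group:", control)
```

The coin flip only decides which of the two already-sampled classes receives the treatment; it does not randomize individual pupils, which is why the design remains quasi-experimental.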

Research Design
The research design used in this study was a quasi-experimental design, in which the participants were not randomly assigned to the control and experimental groups (Creswell, 2009). A quasi-experimental research design is an empirical study used to estimate the causal impact of an intervention on its target population. This design was appropriate for this study, which investigated the impact of an assessment rubric on pupils' academic performance in Mathematics at senior secondary school level. A mixed-methods approach was used, in that both qualitative and quantitative methods were utilized in analyzing the data.

Pre-Test, End-of-Topic Tests and Post-Test Measurements
The pre-test was administered to both the control and the experimental groups in order to help establish the homogeneity of the groups. The post-test and end-of-topic tests were administered after the treatment in order to determine which group achieved more highly. The same test was used as both the pre-test and the post-test, and it had three (3) long questions, one each from probability, linear programming and geometric transformations.

Teacher Questionnaire.
The questionnaire had three parts: sample traits, Likert items and open-ended questions.

Classroom Observations.
The same teachers who completed the questionnaires were observed using an observation schedule which was in the form of questions to be answered by the researcher, in order to compare with their responses from the questionnaire. Pupil involvement was also compared through observation between the experimental and the control groups.

Data Collection Procedure
Data was collected using the following instruments: a teacher questionnaire, achievement tests (pre-test, post-test and end-of-topic tests) and classroom observations, in order to answer the research questions. Pupils' achievement on the pre-test, post-test and end-of-topic tests made up the quantitative data, while questionnaire responses and classroom observations made up the qualitative data. The data collection procedure was divided into three stages. During the pre-intervention stage, the pre-test was administered to both the experimental and the control groups, and the teacher questionnaires were administered. During the intervention/implementation stage, the experimental group was trained on how to use rubrics for assessment, adapted (modified) from Niosi (2012); the two groups were taught probability, linear programming and geometric transformations using problem-based learning strategies (PBLS); the experimental group was assessed and graded using assessment rubrics while the control group was assessed and graded using conventional forms of assessment; and classroom observations were done. The post-intervention stage came after the administration of the treatment to the experimental group, and the end-of-topic tests as well as the post-test were administered at this stage.

Data Analysis.
The data collected was analyzed using the Statistical Package for the Social Sciences (SPSS) version 20, where descriptive statistics were computed for the pre-test and post-test as well as the end-of-topic tests. The mean, standard deviation and frequencies were generated under descriptive statistics. Descriptive statistics provide simple summaries about the sample and make no predictions (Trochim, 2006). Before an independent-samples t-test was performed, the data from the pre-test, end-of-topic tests and post-test was first tested for normality using the Shapiro-Wilk test. Normality is one of the assumptions data must meet in order for an independent-samples t-test to give valid results. The null hypothesis (H0) was that the data is approximately normal and the alternative hypothesis (H1) was that the data is not approximately normal. It follows that if the p-value is greater than alpha (α) = 0.05 (P>0.05), the null hypothesis is not rejected and the data is concluded to be normally distributed; if the p-value is less than alpha (α) = 0.05 (P<0.05), the null hypothesis is rejected and the data is concluded not to be normally distributed. An independent-samples t-test was then run on the pre-test, post-test and end-of-topic tests, in order to test for a significant difference between the means of the two groups (experimental and control) at the alpha (α) = 0.05 level of significance. According to the Institute for Digital Research and Education (2014), an independent t-test is designed to compare means of the same variable between two groups. The qualitative data collected from the teacher questionnaire and classroom observations was analyzed by categorizing, describing and explaining.
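The analysis sequence described above (a Shapiro-Wilk normality check on each group, followed by an independent-samples t-test at α = 0.05) can be reproduced in outline with SciPy. The score vectors below are simulated stand-ins for the study's data, not the actual scores:

```python
import numpy as np
from scipy import stats

alpha = 0.05
rng = np.random.default_rng(42)

# Simulated score vectors standing in for the study's data (N = 55 per group);
# the means and spread here are invented for illustration.
experimental = rng.normal(loc=68, scale=12, size=55)
control = rng.normal(loc=53, scale=12, size=55)

# Shapiro-Wilk: H0 = the scores are approximately normal.
for name, scores in (("experimental", experimental), ("control", control)):
    _, p = stats.shapiro(scores)
    verdict = "approximately normal" if p > alpha else "not normal"
    print(f"{name}: Shapiro-Wilk p = {p:.3f} ({verdict})")

# Independent-samples t-test: H0 = the two group means are equal.
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=True)
decision = "reject H0" if p_value < alpha else "fail to reject H0"
print(f"t = {t_stat:.3f}, p = {p_value:.3f} -> {decision}")
```

The same decision rule applies at every stage of the study: a p-value below 0.05 rejects the null hypothesis of equal means, a p-value above it does not.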

Findings from the teacher questionnaire
Teachers' responses towards the use of rubrics were gathered using the questionnaire. According to Monkey (2014), a Likert scale measures attitudes and behavior using answer choices that range from one extreme to another. Teachers were asked to respond on a 5-point Likert scale: totally agree=1, agree=2, neutral=3, disagree=4 and totally disagree=5. This helped to determine the degree of agreement about the use of rubrics as an assessment tool.
Table 4.1: Teachers' responses on rubrics as an assessment tool.
Likert scale items:
1. I use rubrics to assess my pupils in class.
2. A rubric is a good assessment tool.
3. Rubrics improve pupils' performance.
4. Rubrics make pupils aware of what the teacher is looking for in a particular task.
5. Using rubrics in assessment enhances pupils' learning.
6. Using rubrics in assessment makes pupils active in their learning.
7. Rubrics are an effective assessment tool in the teaching and learning of Mathematics.
8. Rubrics promote peer and self-assessment.
9. Rubrics help to identify the strengths and weaknesses of pupils.
10. Rubrics can be used to provide the teacher with feedback about the pupils' understanding.

Frequency and Percentages of each response
Key: Totally Disagree (TD), Disagree (D), Neutral (N), Agree (A), Totally Agree (TA). Table 4.1 above shows the responses from the six (6) teachers of Mathematics in terms of frequencies and their corresponding percentages. Brown (2011) pointed out that it is not a good idea to rely heavily on interpreting single items, because single items are relatively unreliable; he further advised that Likert scales containing multiple items are likely to be more reliable than single items. The data in Table 4.1 had very few or no responses under "Totally Disagree" and "Disagree". As a result of this observation, and for ease of analysis, the responses "totally disagree" and "disagree" were combined into "disagree", as both were considered indications of a negative attitude, while "totally agree" and "agree" were combined into "agree", as both were considered indications of a positive attitude.
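The collapsing of the five response categories into agree/neutral/disagree percentages can be illustrated with a small tabulation. The six responses below are hypothetical, not the table's actual data:

```python
from collections import Counter

# Hypothetical responses from six teachers on the questionnaire's 5-point
# scale (1 = totally agree, 2 = agree, 3 = neutral, 4 = disagree,
# 5 = totally disagree). These numbers are invented for illustration.
responses = [1, 1, 2, 2, 4, 5]

# Collapse: 1-2 -> "agree", 3 -> "neutral", 4-5 -> "disagree".
collapsed = Counter(
    "agree" if r <= 2 else "neutral" if r == 3 else "disagree"
    for r in responses
)

n = len(responses)
for label in ("agree", "neutral", "disagree"):
    count = collapsed[label]
    print(f"{label}: {count} ({100 * count / n:.1f}%)")
```

With six respondents each response is worth 16.7%, which is why the percentages reported for the items below come in multiples of 16.7.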
Item 1 investigated whether teachers use rubrics to assess their pupils in class. The results revealed that 16.7% (1) totally disagreed, 16.7% (1) disagreed, no teacher was unsure, 33.3% (2) agreed and 33.3% (2) totally agreed. Using the conversions, 33.3% disagreed and 66.7% agreed.
Item 2 investigated whether a rubric is a good assessment tool. The results revealed that no teacher totally disagreed, 16.7% (1) disagreed, 33.3% (2) were not sure, 33.3% (2) agreed and 16.7% (1) totally agreed. After using the conversions, 16.7% disagreed and 50% agreed.
Item 4 investigated whether using rubrics makes pupils aware of what the teacher is looking for in a particular task. The results showed that no teacher totally disagreed, 16.7% (1) disagreed, no teacher was unsure, 50% (3) agreed and 33.3% (2) totally agreed. Using the conversions, 16.7% disagreed and 83.3% agreed.
Item 5 dealt with whether using rubrics in assessment enhances pupils' learning. The results revealed that 16.7% (1)
Item 6 investigated whether using rubrics in assessment makes pupils active in their learning. The results showed that no teacher totally disagreed or disagreed, 33.3% (2) of the teachers were not sure, 50% (3) agreed and 16.7% (1) totally agreed. Using the conversions, no teacher disagreed and 66.7% agreed.
Item 7 investigated whether a rubric is an effective assessment tool in the teaching and learning of Mathematics. The results showed that 16.7% (1) totally disagreed, another 16.7% (1) disagreed, no teacher was unsure, 33.3% (2) agreed and 33.3% (2) totally agreed. After using the conversions, 33.4% disagreed and 66.6% agreed.
Item 8 investigated whether rubrics promote peer and self-assessment. The results revealed that no teacher totally disagreed or disagreed, 16.7% (1) was not sure, 16.7% (1) agreed and 66.7% (4) totally agreed. After the conversions, no one disagreed and 83.4% agreed.
Item 9 dealt with whether rubrics help to identify the strengths and weaknesses of pupils. The results showed that no teacher totally disagreed, 16.7% (1) disagreed, no teacher was unsure, 66.7% (4) agreed and 16.7% (1) totally agreed. Using the conversions, 16.7% disagreed and 83.4% agreed.
Item 10 investigated whether rubrics can be used to provide the teacher with feedback about the pupils' understanding. The results revealed that 16.7% (1) totally disagreed, no teacher disagreed or was unsure, 50% (3) agreed and 33.3% (2) totally agreed. After the conversions, 16.7% of the teachers disagreed and 83.3% agreed.

Teachers' responses on open ended questions from the questionnaire
This information was collected with a view to understanding the impact of using rubrics as an assessment tool in Mathematics at senior secondary school level, as well as the challenges faced by teachers and pupils when using rubrics. Teachers were also asked about other types of assessment tools that could be used in Mathematics.
Question one asked: What are some of the difficulties in using rubrics as an assessment tool in Mathematics? The first challenge mentioned was lack of experience in using a rubric, on the part of both pupils and teachers, and this accounted for 50%. The second challenge mentioned was that using rubrics is time consuming, which accounted for 33.3%. The third challenge was that teachers sometimes find it difficult to come up with suitable language when developing and using rubrics with pupils, and this accounted for 16.7%.
Question two asked: Do you think the use of rubrics can improve pupils' performance in Mathematics? 83.3% of the teachers agreed that the use of rubrics can improve pupils' performance, for the following reasons: when rubrics are used properly by both the teacher and the pupils, with pupils trained in how to use them, they make pupils very active in class (pupil-centered) as they promote constructivist learning; they help pupils to develop self- and peer-assessment skills through group work; and when the rubrics are developed together with the pupils, they give pupils an idea of what their teachers are looking for. 16.7% disagreed, saying that the use of rubrics has nothing to do with the performance of pupils.
Question three asked: What other types of assessment tools do you use during the teaching and learning of Mathematics? The most frequently mentioned assessment tool used in a Mathematics class was the end-of-term test, which accounted for 50%; class exercises and homework accounted for 33.3%; and group work presentations accounted for 33%.
In both cases the p-value was greater than alpha (α) = 0.05 (P=0.259>0.05 and P=0.454>0.05). This indicated that the scores from the pre-test were normally distributed and implied that the independent-samples t-test could be used on this data. Table 4.4 shows the independent-samples t-test on the pre-test for the experimental and control groups. The first section shows Levene's test for equality of variances. According to Levene (1960), Levene's test is an inferential statistic used to assess the equality of variances of a variable calculated for two or more groups; it is often used before a comparison of means, since the independent-samples t-test requires the assumption of homogeneity of variance (equal variances). The null hypothesis for Levene's test was H0: σ²1 = σ²2 (experimental group variance = control group variance), at the alpha (α) = 0.05 level of significance. In Table 4.4, the p-value (sig) was 0.446, which was greater than 0.05 (P=0.446>0.05). The null hypothesis was therefore not rejected, and it was concluded that there was no significant difference in the variances of the two groups; hence the first row of the SPSS output, for equal variances assumed, was used. The second section of the independent-samples t-test in Table 4.4 is the t-test for equality of means. In Table 4.4, the p-value (sig) was 0.300, which was greater than alpha (α) = 0.05 (P=0.300>α=0.05, t=1.042). Since the p-value was greater than 0.05 (P>0.05), the null hypothesis (H0) was not rejected. This indicated that there was no statistically significant difference in the pre-test results between the experimental and the control group: the two group means were equal before the administration of the treatment, the difference in mean scores of 2.7% was not significant and could have occurred by chance, and the conclusion was that the two groups were equivalent.
Tables 4.5, 4.6 and 4.7 show the SPSS output after treatment for the experimental and control groups. In Table 4.5, the p-value was 0.498 for the experimental group and 0.449 for the control group. In both cases the p-value was greater than 0.05 (P=0.498>0.05 and P=0.449>0.05). This indicated that the scores from the post-test were normally distributed and implied that the independent-samples t-test could be used on the data. In Table 4.7, the first section shows Levene's test for equality of variances, an inferential statistic used to assess the equality of variances of a variable calculated for two or more groups (Levene, 1960). The independent-samples t-test requires the assumption of homogeneity of variance (equal variances). The null hypothesis for Levene's test was H0: σ²1 = σ²2 (experimental group variance = control group variance). According to Table 4.7, the p-value was 0.890 (sig), which was greater than 0.05 (P=0.890>0.05). The null hypothesis was therefore not rejected, and it was concluded that there was no significant difference in the variances of the two groups, i.e. the experimental and control groups; hence the first row of the SPSS output, for equal variances assumed, was used. The second section of the independent-samples t-test in Table 4.7 is the t-test for equality of means. In Table 4.7, the p-value (sig) was 0.000, which was less than 0.05 (P=0.00<α=0.05, t=10.945). Since the p-value was less than 0.05 (P<0.05), the null hypothesis was rejected. This indicated that there was a statistically significant difference in the post-test results between the experimental and the control group after the administration of the treatment, which means that the difference in mean scores (25.69%) was statistically significant.
This result illustrated that the pupils who were assessed using rubrics in the three topics (experimental group) performed better than the pupils who were assessed using conventional forms of assessment (control group). Hence the conclusion that rubrics have a positive effect on pupils' performance in Mathematics.
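The normality check that precedes each of these t-tests follows the same pattern throughout the section. A minimal sketch of the Shapiro-Wilk decision rule in SciPy, using hypothetical scores rather than the study's raw data, looks like this:

```python
import numpy as np
from scipy import stats

# Hypothetical post-test scores (%) for one group of 55 pupils;
# the mean loosely mirrors the experimental group's reported 59.8%.
rng = np.random.default_rng(7)
scores = rng.normal(loc=59.8, scale=20.0, size=55)

# Shapiro-Wilk test, H0: the scores come from a normal distribution.
sw_stat, sw_p = stats.shapiro(scores)
normal = sw_p > 0.05  # p > 0.05 -> do not reject normality; the t-test is appropriate
```

A p-value above 0.05 means normality is not rejected, which is the justification the text gives each time before applying the independent sample t-test.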

End of topic test results and discussion for the Experimental and Control Group.
End of topic tests, covering Probability, Geometric Transformations and Linear Programming, were administered to both the experimental group and the control group. When comparing the two means from the experimental and the control group for each end of topic test, an independent sample t-test was again used, and normality was checked with the Shapiro-Wilk test. If the p-value from the SPSS output is greater than alpha (α) = 0.05 (P>0.05), then the data is normally distributed; if the p-value is less than 0.05 (P<0.05), then the data is not normally distributed.

In table 4.8, the p-value was 0.063 for the experimental group and 0.060 for the control group. In both cases, the p-value was greater than 0.05 (P=0.063>0.05 and P=0.060>0.05). This indicated that the probability test scores from the experimental and the control group were normally distributed, which implied that the independent sample t-test could be used on the data.

Table 4.9 shows the descriptive statistics for probability, which include the sample sizes (N=55 for the experimental group and N=55 for the control group) and the means (67.52% for the experimental group and 53.36% for the control group), indicating that the experimental group performed better than the control group in probability. The difference in means was 14.17%, which could not have happened by chance. The standard deviation (Std) was 21.44 for the experimental group and 22.87 for the control group, showing that the spread of the data was slightly wider for the control group than for the experimental group.

In table 4.10, the first section of the independent sample t-test shows Levene's test for equality of variance for probability.
The independent sample t-test requires the assumption of homogeneity of variance (equal variances). At the alpha (α) = 0.05 level of significance, the p-value in table 4.10 was 0.724, which was greater than 0.05 (P=0.724>0.05). This implied that the null hypothesis was not rejected, and it was concluded that the data had equal variances; hence the first row of the SPSS output was used. The second section of the independent sample t-test in table 4.10 is the test for equality of means. The p-value (sig) in table 4.10 was 0.001, which was less than alpha (α) = 0.05 (P=0.001<α=0.05, t=3.349). Since P<0.05, the null hypothesis was rejected, and it was concluded that there was a statistically significant difference between the mean scores of the experimental and the control group. The mean score for the experimental group was 14.16% higher than that of the control group, which implied that pupils who were assessed using rubrics (experimental group) performed better than those who were assessed using conventional forms of assessment (control group).

In table 4.11, the p-value was 0.333 for the experimental group, which was greater than alpha (α) = 0.05 (P=0.333>0.05), and 0.551 for the control group, which was also greater than alpha (α) = 0.05 (P=0.551>0.05). In both cases, the p-value was greater than alpha (α) = 0.05. This indicated that the geometric transformations test scores from the experimental and the control group were normally distributed, which implied that the independent sample t-test could be used on this data.

Table 4.12 shows the descriptive statistics, which include the sample sizes (N=55 for the experimental group and N=55 for the control group) and the means (56.3% for the experimental group and 54.4% for the control group), indicating that the experimental group performed slightly better than the control group in geometrical transformations.
The difference in means was 1.9%, and the standard deviation (Std) was 22.14 for the experimental group and 19.18 for the control group, showing that the spread of the data was slightly wider for the experimental group than for the control group. In table 4.13, the first section of the independent sample t-test shows Levene's test for equality of variance in geometric transformations. The independent sample t-test requires the assumption of homogeneity of variance (equal variances). At the alpha (α) = 0.05 level of significance, the p-value in table 4.13 was 0.263 (P=0.263>0.05). This implied that the null hypothesis was not rejected, and it was concluded that the data had equal variances; hence the first row of the SPSS output was used. The second section of the independent sample t-test in table 4.13 is the test for equality of means. The p-value (sig) in table 4.13 was 0.636, which was greater than alpha (α) = 0.05 (P=0.636>α=0.05, t=0.474). Since P>0.05, the null hypothesis (H0) was not rejected, and it was concluded that there was no statistically significant difference between the mean scores of the experimental and the control group. The mean score for the experimental group was 1.9% higher than that of the control group, which implied that the performance of pupils who were assessed using rubrics and of those who were assessed using conventional forms of assessment was almost the same.

In table 4.14, the p-value was 0.082 for the experimental group, which was greater than alpha (α) = 0.05 (P=0.082>0.05), and 0.522 for the control group, which was also greater than 0.05 (P=0.522>0.05). In both cases, the p-value was greater than 0.05. This indicated that the linear programming test scores from the experimental and the control group were normally distributed, which implied that the independent sample t-test could be used on this data.
Table 4.15 shows the descriptive statistics, which include the sample sizes (N=55 for the experimental group and N=55 for the control group) and the means (57.9% for the experimental group and 46.4% for the control group). This indicated that the experimental group performed better than the control group in linear programming. The difference in means was 11.5%, which could not have happened by chance. The standard deviation (Std) was 18.19 for the experimental group and 22.05 for the control group, showing that the spread of the data was slightly wider for the control group than for the experimental group. In table 4.16, the first section of the independent sample t-test shows Levene's test for equality of variance for linear programming. The independent sample t-test requires the assumption of homogeneity of variance (equal variances). At the alpha (α) = 0.05 level of significance, the p-value was 0.079, which was greater than 0.05 (P=0.079>0.05). This implied that the null hypothesis was not rejected, and it was concluded that the data had equal variances; hence the first row of the SPSS output was used. The second section in table 4.16 shows the test for equality of means. The p-value (sig) in table 4.16 was 0.004, which was less than alpha (α) = 0.05 (P=0.004<α=0.05, t=2.981). Since P<0.05, the null hypothesis was rejected, and it was concluded that there was a statistically significant difference between the mean scores of the experimental and the control group. The mean score for the experimental group was 11.49% higher than that of the control group, which implied that pupils who were assessed using rubrics (experimental group) performed better than those who were assessed using conventional forms of assessment (control group).
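Because the group sizes, means and standard deviations for the linear programming test are all reported, the equal-variance t statistic can be reproduced from those summary statistics alone. The sketch below is only a cross-check of the reported SPSS values, not a reanalysis of the raw data:

```python
from scipy import stats

# Summary statistics for the end of linear programming test, as reported in the text.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=57.9, std1=18.19, nobs1=55,   # experimental group
    mean2=46.4, std2=22.05, nobs2=55,   # control group
    equal_var=True,                     # equal variances assumed (Levene p = 0.079)
)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Up to rounding of the reported means, this recovers values very close to the t = 2.981 and P = 0.004 quoted from table 4.16.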

The Relationship of Test Results between the Experimental Group and the Control Group

Figure 4.1: Relationship of Test Results between the Experimental Group and the Control Group.
In this study, figure 4.1 shows from the post-test and end of topic test results that the experimental group achieved significantly higher mean scores than the control group. The significant difference in mean scores between the two groups is an indication that a rubric is an effective assessment tool in Mathematics at secondary school level.

Discussion of Findings.
The findings of the study are discussed thematically with reference to the research objectives.

A rubric as an assessment tool
Based on the qualitative analysis of the teacher questionnaire in table 4.1, the study revealed that using rubrics in assessment was not a common practice among teachers, as indicated by approximately 33.3% of the teachers, while 66.7% indicated that using rubrics as an assessment tool in the teaching and learning of Mathematics was very necessary. The findings also revealed that using a rubric as an assessment tool made pupils aware of what teachers look for in a particular task, which is an ingredient of an effective assessment tool; this accounted for 87.3% as shown in table 4.1. Similar findings have been reported in the literature: using rubrics in assessment increased pupils' awareness of learning goals, clarified teachers' expectations and explained the criteria needed to meet a quality performance. The responses from the questionnaire also revealed (65.5% of teachers) that using rubrics as an assessment tool enhances pupils' learning. Similar positive findings were evidenced in a study conducted by Arrufat et al. (2014), who concluded that using rubrics in assessment has the potential to enhance students' learning, as it requires a high-quality answer from the students, hence the need for students to study more in order to solve a Mathematical problem according to the rubric and not mechanically. This study also revealed that promoting peer assessment and self-assessment makes a rubric effective, which accounted for 50% as indicated in table 4.1. These findings are strikingly similar to those of Dochy et al. (2006), who concluded that self and peer assessment are inseparable from any assessment that is aimed at improving learning.
Another aspect of an effective assessment tool was making pupils active in their learning, influenced by constructivist notions of teaching and learning, which emphasize the importance of learners actively constructing knowledge for themselves, with the learners at the forefront of learning; this accounted for 66.7%. Additionally, it was observed during the classroom observations that pupils who were exposed to rubrics were more active in the learning process than those who were not. This is supported by Goodrich (1997), who concluded that when pupils are active participants in the teaching and learning process, they develop a sense of responsibility, which increases their ability to judge a quality performance and facilitates constructive and self-regulated learning. Finally, the study revealed that, because a rubric helps to identify the strengths and weaknesses of pupils, it provides feedback to both the teacher and the learner, making it an effective assessment tool; this accounted for 83.3% as shown in table 4.1. This is in agreement with the study conducted by Andrade (2000), who concluded that the main purpose of rubrics is to provide pupils and teachers with informative feedback about work in progress, and this provides a strong link between assessment and performance. Therefore, it is only by assessing pupils with an effective assessment tool like a rubric that teachers can tailor instruction directly to individual pupils' needs.

Classroom assessment tools employed by mathematics teachers.
Based on the classroom observations, this study revealed that teachers of Mathematics had limited ways and methods of assessing their pupils. The teachers mainly used tests, which accounted for 66.7% of the observed teachers, and only one teacher was seen to employ group work during lessons. Egodawatte (2000) pointed out that observing pupils during group work and classroom observation as a form of assessment is equally important, since some pupils are better able to show understanding in verbal situations than in formal or written work. He further commented that teachers need to use various assessment tools in order to determine pupils' Mathematical skills and produce a comprehensive picture of academic achievement; such tools include journals, concept maps, portfolios and rubrics, among others. However, the findings from the classroom observations and the teacher questionnaire lead to the conclusion that tests and exercises were the assessment tools commonly used in Mathematics classes at Malela Secondary School. Furthermore, Biggs (1999) argues that what and how pupils learn depends on how they think they will be assessed. This means that in most cases pupils will only focus on learning the skills that will permit them to do well in class. If the only forms of assessment used in the classroom are tests or exercises, pupils will memorize the factual information they need to know in order to get a good grade, forgetting much of it a week later (Mazur, 2014). This is where alternative assessment should come in: alternative assessment tools, as an alternative to standard tests and exercises, should provide a true evaluation of what pupils have learned, going beyond acquired knowledge to focus on how pupils apply that knowledge.

Impact of rubrics on academic performance of pupils in Mathematics
To test the research objective of whether the use of rubrics had a significant impact on the performance of pupils in Mathematics, a two-tailed independent sample t-test was set at significance level α = 0.05, with the null hypothesis (H0) that there was no statistically significant difference in pupils' performance in Mathematics between the pupils who were exposed to rubrics (experimental group) and those who were not (control group). According to table 4.3, the descriptive statistics of the pre-test showed that the mean for the experimental group was 31.8% and for the control group 29.1%, a mean difference of 2.7%. The two means were compared using an independent sample t-test at significance level α = 0.05, with the null hypothesis H0: µ1 = µ2 (experimental mean = control mean). Table 4.4 showed that the p-value (sig) was 0.300, which was greater than alpha (α) = 0.05 (P=0.300>α=0.05, t=1.04). Since P>0.05, the null hypothesis (H0) was not rejected, and it was concluded that there was no statistically significant difference between the mean score of the experimental group and that of the control group. This implied that the experimental group and the control group started at the same level, as neither group was better than the other.
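The decision rule applied repeatedly in this section (compare the SPSS p-value with α = 0.05 and state the conclusion about H0: µ1 = µ2) can be captured in a small helper. The function name here is ours, for illustration only:

```python
def t_test_decision(p_value: float, alpha: float = 0.05) -> str:
    """State the two-tailed t-test conclusion for H0: mu1 == mu2."""
    if p_value < alpha:
        return "reject H0: statistically significant difference between the group means"
    return "fail to reject H0: no statistically significant difference"

print(t_test_decision(0.300))  # pre-test (table 4.4): groups start out equivalent
print(t_test_decision(0.000))  # post-test (table 4.7): treatment effect is significant
```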
According to table 4.9, the descriptive statistics of the end of probability test showed that the mean for the experimental group was 67.5% and for the control group 53.4%, indicating that the experimental group outperformed the control group in probability, with a mean difference of 14.2%. The two means were compared using an independent sample t-test at significance level alpha (α) = 0.05, with the null hypothesis H0: µ1 = µ2 (experimental mean = control mean). The results in table 4.10 revealed that the p-value (sig) was 0.001, which was less than alpha (α) = 0.05 (P=0.001<α=0.05, t=3.349). Since P<0.05, the null hypothesis (H0) was rejected, and it was concluded that there was a statistically significant difference in probability between the group assessed using rubrics (experimental mean) and the group assessed using conventional forms of assessment (control mean). This implied that the mean difference was not due to chance and that using a rubric to assess pupils' work had an impact on pupils' performance in probability.

According to table 4.12, the descriptive statistics for the end of geometric transformations test showed that the mean for the experimental group was 56.3% and for the control group 54.4%, implying that the group exposed to rubrics (experimental group) performed slightly better than the group not exposed to rubrics (control group). The mean difference between the two groups was 1.9%. The two means were compared using an independent sample t-test at significance level alpha (α) = 0.05, with the null hypothesis H0: µ1 = µ2 (experimental mean = control mean).
Table 4.13 showed that the p-value (sig) was 0.636, which was greater than alpha (α) = 0.05 (P=0.636>α=0.05, t=0.474). Since P>0.05, the null hypothesis (H0) was not rejected, and it was concluded that there was no statistically significant difference between the mean scores of the experimental group and the control group; the mean difference (1.9%) occurred by chance. This implied that the impact of assessing geometric transformations using rubrics was not very different from the impact that conventional forms of assessment had on pupils' performance in geometric transformations.

Table 4.15 shows the descriptive statistics of the end of Linear Programming test: the mean for the experimental group was 57.9% while the mean for the control group was 46.4%. This implied that the group assessed with rubrics (experimental) outperformed the group assessed using conventional forms of assessment (control), with a mean difference of 11.5%, which could not have happened by chance. The two means were compared using an independent sample t-test at significance level α = 0.05, with the null hypothesis H0: µ1 = µ2 (experimental mean = control mean). The results in table 4.16 showed that the p-value (sig) was 0.004, which was less than alpha (α) = 0.05 (P=0.004<α=0.05, t=2.981). Since P<0.05, the null hypothesis was rejected, and it was concluded that there was a statistically significant difference between the experimental group and the control group and that the difference in means (11.5%) was indeed not due to chance.
This implied that assessing linear programming using rubrics (experimental group) had a more positive impact on the performance of pupils than assessing using conventional forms of assessment (control group). The post-test results (table 4.6) showed that the mean for the experimental group was 59.8% and for the control group 34.1%, implying that the experimental group performed better than the control group, with a mean difference of 25.7%. After comparing the two post-test means using an independent sample t-test at significance level alpha (α) = 0.05, with the null hypothesis H0: µ1 = µ2 (experimental mean = control mean), table 4.7 showed that the p-value (sig) was 0.000, which was less than 0.05 (P=0.000<α=0.05, t=10.945). Since P<0.05, the null hypothesis (H0) was rejected, and it was concluded that there was a statistically significant difference between the experimental group and the control group and that the mean difference (25.7%) was not due to chance. This implied that using a rubric to assess Mathematics in general has a more positive impact on pupils' academic performance than assessing Mathematics using conventional forms of assessment. The findings of this study were in line with those of a good number of researchers (Schafer et al., 2007; Arrufat et al., 2014; Lee et al., 2005; Chukwuyenum, 2013), whose general view was that the understanding of Mathematics is enhanced by the use of a rubric, and that this translates into improved academic performance, especially when the rubric is co-constructed with the pupils and made available before any given task. Similar findings were also presented in a study by Andrade (2001), who reported that when pupils are involved in co-designing the rubric, they tend to display deeper levels of thinking, and that merely providing a rubric to pupils does not lead to better performance.
He further summarized the issue by concluding that a well-designed rubric has the potential not only to improve the quality of assessment, but also to enhance the teaching and learning process, which translates into improved performance, as rubrics reduce uncertainty and ambiguity.

Conclusion.
Based on the interpretation and analysis of the study, the pre-test mean achievement score, obtained before the implementation of the rubric as an assessment tool, was 31.8% for the experimental group and 29.1% for the control group. After the experimental group was exposed to rubrics, its mean achievement score increased to 59.8%, while the mean achievement score for the control group increased to 34.1%. The improvement in pupils' academic performance in the experimental group can be attributed to the use of a rubric in assessing every piece of work done in that group; this simply means that a rubric has a positive impact in Mathematics. Additionally, rubrics made the learning targets clearer: if pupils know what the learning target is, they are better able to achieve it, and when pupils have the assessment criteria at hand as they do a task, they are better able to critique their own performance as well as their friends' work, in order to give each other specific feedback on their performance. The assessment process was also made more accurate and fair by the use of a rubric, in that a teacher is more likely to be consistent in his or her judgment by referring to a common rubric when reviewing each pupil's performance. A rubric also helped to anchor judgment, as it continually draws the teacher's attention to each of the key criteria, so as to avoid variations in the application of criteria from pupil to pupil. However, designing rubrics appears to be a difficult process, especially writing descriptions of performance at each level, hence the reason why rubrics should be developed together with the learners. On the other hand, a poorly designed rubric can actually affect the learning process negatively. Finally, the possible role that rubrics can play in learner-centered learning and assessment is related to a constructivist approach to learning, and it provides a foundation for constructivist teaching.
The researcher thus regarded using rubrics in class as a good foundation for embedding constructivism in the teaching and learning process. Using a rubric as an assessment strategy was therefore seen as having the