Sunday, June 3, 2012

Various Variables

According to the authors of the textbook, “one of the most frequent misinterpretations in statistics (and in education) is to infer that because two variables are correlated with one another, one variable causes the other” (Kubiszyn & Borich, 2010).

This is basically saying that you cannot take two correlated variables and assume one is responsible for the other, or that the second score would not exist as it does if not for the first variable. Assuming causation like this oversimplifies things, because if we assume one variable depends on the other, then where do we even begin to measure them? Where is the starting point, and which variable do we treat as the base in our scoring and interpretation? It is not wrong to consider all of the numbers together in our calculations, but it is a fallacy to think they exist dependent on one another. Multiple variables can enter a calculation and complement each other when scores are totaled, but the variance of one part of a test does not depend on the other variables in order to be meaningful.
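To make that distinction concrete, here is a minimal sketch (the variable names and numbers are hypothetical, not from the textbook) in which two test scores correlate strongly only because a third factor, study time, drives both of them; neither score causes the other.

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)

# Hypothetical data: study_time drives both scores; the scores never touch each other.
study_time = [random.uniform(0, 10) for _ in range(100)]
math_score = [5 * t + random.gauss(0, 5) for t in study_time]
reading_score = [4 * t + random.gauss(0, 5) for t in study_time]

# High correlation, yet no causal link between the two scores themselves.
print(round(pearson_r(math_score, reading_score), 2))
```

The correlation comes out high even though changing one score would do nothing to the other, which is exactly the misinterpretation the textbook warns about.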

In addition, there are already well-established, researched methods for describing each variable as it stands on its own, from the range, to the semi-interquartile range (SIQR), to the standard deviation. To deduce a causal relationship from a shallow comparison ignores the careful calculations that were created to describe each variable more accurately. Basically, it is not plausible or valid to suppose that one variable creates or paves the pattern for the other variable it happens to correlate with. Rather, variables should be examined on their own in the scoring process, not treated as dependent on one another. If a problem exists for one variable, then that particular variable needs to be evaluated and analyzed so the problem can be pinpointed and improved. It is a bit like having a child who gets in trouble whenever they hang out with a certain friend. You can blame the friendship and decide the friendship is the problem, but that ignores how each individual child is being disruptive on their own and in their own way, regardless of the other child. It is in our nature to be critical, but we also need to be analytical about problems and their underlying causes instead of leaning on excuses or escapism, and it is very much an escapism to assume that the score of one variable depends on another. A quick sketch of those spread measures follows below.
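Here is a small sketch of the three spread measures named above, computed for a single variable on its own. The score list is made up for illustration, and the quartile convention is the default used by Python's statistics module; other conventions exist.

```python
import statistics

def spread_measures(scores):
    """Range, semi-interquartile range (SIQR), and standard deviation for one variable."""
    ordered = sorted(scores)
    score_range = ordered[-1] - ordered[0]
    # Quartiles from statistics.quantiles (default 'exclusive' method; conventions vary).
    q1, _, q3 = statistics.quantiles(ordered, n=4)
    siqr = (q3 - q1) / 2
    sd = statistics.stdev(ordered)  # sample standard deviation
    return score_range, siqr, sd

scores = [72, 85, 91, 68, 77, 80, 95, 62, 88, 74]  # hypothetical test scores
r, siqr, sd = spread_measures(scores)
print(f"range={r}, SIQR={siqr:.1f}, SD={sd:.1f}")
```

Each of these numbers describes the spread of one variable by itself, without assuming anything about a second variable.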

What are the potential dangers to assessment and learning when one mistakenly implies causality where only a correlation between two variables exists?
One of the biggest dangers of this kind of mistaken analysis is assuming that one variable is failing because another variable is failing, in other words failing on behalf of the other variable. If we constantly blame the flaws of one thing on something else, we may never be able to properly fix the real issues. If there is always an excuse for error based on another assessment, then we are not looking at the original assessment itself, which is the thing we could actually change. We waste time, put the fault on the wrong thing, and never get at the real differences, because the excuses cover up the failures of some assessments.

The textbook states, “Just as you can expect to make scoring errors, you can expect to make errors in test construction. No test you construct will be perfect; it will include inappropriate, invalid or otherwise deficient items. In the remainder of this chapter we will introduce you to a technique called item analysis. Item analysis can be used to identify items that are deficient in some way, thus paving the way to improve or eliminate them, with the result being a better overall test” (p. 227). This addresses a real problem: if teachers cannot learn to evaluate their own assessments, how do those assessments ever improve or become better aligned with students’ needs?

That is why it is dangerous to assume that one variable causes another variable’s problems, when really we need to assess the individual differences. By individual differences, I mean that every issue should be treated with the same objectivity and go through the same kinds of quantitative and qualitative item analysis that all other assessments must go through. It is in the best interest of students and their professors to consider each test question as its own variable. Again, as I said before, each student is unique, and their improvement depends on themselves and on teachers willing to go the extra mile. Otherwise, if students are simply grouped together, or assumed to cause each other’s failure, how can there be any real advancement and improvement in skills? Variables should be treated on an individual basis and not lumped together into a whole to be blamed as a whole. Instead, each part of an assessment, down to the individual scales like short essay items, needs to be analyzed on its own, judged on its own terms and statistical outcomes, and not blamed on the others, in order for effective overall improvement to come.
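As a rough illustration of the quantitative side of item analysis, here is a minimal sketch that computes a difficulty index (proportion of students answering an item correctly) and a simple upper-minus-lower discrimination index for one item. The student data, the function name, and the 27% group fraction are assumptions for the example, not figures from the textbook.

```python
def item_analysis(item_correct, total_scores, group_fraction=0.27):
    """Difficulty (p) and discrimination (D) for a single test item.

    item_correct: list of 0/1 flags, whether each student got the item right.
    total_scores: each student's total test score, in the same student order.
    """
    n = len(item_correct)
    p = sum(item_correct) / n  # difficulty: proportion answering correctly

    # Rank students by total score, then compare the upper and lower groups.
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    k = max(1, int(n * group_fraction))
    upper = [item_correct[i] for i in ranked[:k]]
    lower = [item_correct[i] for i in ranked[-k:]]
    d = sum(upper) / k - sum(lower) / k  # discrimination index
    return p, d

# Hypothetical class of ten students, one item.
item_correct = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
total_scores = [95, 88, 52, 90, 60, 84, 79, 48, 55, 92]
p, d = item_analysis(item_correct, total_scores)
print(f"difficulty={p:.2f}, discrimination={d:.2f}")
```

The point of a sketch like this is that each item is judged on its own numbers, which is exactly the kind of individual evaluation argued for above.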
