Validity and Reliability Tests

CHAPTER I INTRODUCTION

Background

The interaction of teaching and learning rests on the professionalism of teachers and on student achievement, with particular attention to the quality of teaching resources. Knowledge transfer requires all of its components to work together, forming the complexity of learning across the cognitive, affective, and psychomotor domains that underpin the formation of students' knowledge. The links between these components are carried out in line with the learning context and are guided by the objectives to be achieved. Learning objectives, designed in accordance with the syllabus, are the measure of students' learning success. To keep pace with the achievement of those objectives, periodic evaluation of the development of student learning outcomes is required. Student learning outcomes serve as an evaluation of the extent to which students have mastered the teaching materials that have been delivered.

Evaluation, as a process of assessing the overall educational attainment of an educational unit, covers every effort that contributes to success in line with the purpose of education, namely to produce output consistent with the field being studied. One concrete, visible form of educational evaluation is the numerical record of student learning, which is obtained through assessment. Assessment is a process of determining whether the processes and results of a program of activities accord with the purpose or criteria that have been established (Kelvin, 2009: 6). Whatever form evaluation takes, assessment is closely tied to measurement, because measurement generates the data for the assessment process. As Kelvin (2009: 6) states, the quantitative aspects of assessment are obtained through measurement, whereas the qualitative aspects, such as interpretation and judgment, are based on the quantitative data that measurement produces. Measurement results yield descriptive data that are interpreted against assessment criteria that have been set. Judgments about which criteria count as good are expected to provide reliable information. In other words, teachers are required to prepare sound assessments so that the learning objectives that have been set can be achieved optimally.

At the end of a lesson, teachers are expected to prepare test instruments that can be accounted for. As Tuckman (in Nurgiyantoro, 2010: 150) puts it, a test must be accountable in terms of appropriateness, validity, reliability, interpretability, and usability. The main purpose of assessment, then, is to determine the extent to which students have mastered the basic competencies after following a series of lessons. In line with Tuckman, Purwanto (2011: 114) also holds that, as a measuring tool, a test of learning outcomes (THB) must qualify as a good measuring instrument. A good measuring instrument must meet two requirements: validity and reliability. Purwanto explains that a valid THB is one that measures exactly the state it is meant to measure; conversely, a THB is said to be invalid when it is used to measure a state that it does not measure precisely. Validity is closely related to reliability. Reliability, or consistency of measurement, is required to obtain valid results, but reliability can be present without validity (Nurgiyantoro, 2010: 150).
Whereas validity concerns the appropriateness of interpreting test results, reliability concerns the consistency of those results. If the results of a test are relatively stable, the test can be said to be reliable or trustworthy, in the sense that the competence being tested reflects the students' mastery.

Problem Formulation

Based on the description above, the problems addressed are: What is the nature of the validity of a test? What is the nature of the reliability of a test?

Purpose

Following the problem formulation above, the purpose of this paper is to: describe the nature of test validity, covering the definition of test validity, the kinds of test validity, the factors that affect test validity, and how to calculate test validity; and describe the nature of test reliability, covering the definition of test reliability, the kinds of test reliability, the factors that affect test reliability, and how to calculate test reliability.

CHAPTER II DISCUSSION

Test Validity

Definition of Validity

Validity is often rendered simply as soundness (Thoha, 2001: 109). Validity is a quality that shows the relationship between a measurement (diagnosis) and the meaning or purpose of learning or of behavioral criteria (Purwanto, 2002: 137). According to Sukardi (2011: 3), validity is the degree to which a test measures what it intends to measure. A measuring instrument is said to be valid when it measures the content of the object that ought to be measured and conforms to particular criteria; in other words, when there is a fit between the instrument, its measuring function, and the target of measurement. The validity of an evaluation instrument is nothing other than the degree to which a test measures what it intends to measure (Singarimbun & Effendi, 2011: 122).

The validity of an evaluation instrument carries several important implications. Validity relates to the accuracy of interpreting the results of a test or evaluation instrument for a group of individuals, not to the instrument itself. Validity is a matter of degree, which may be low, medium, or high. A valid test is not universally valid: a test is considered valid only for a specific purpose. A test that is valid for mathematics is not necessarily valid for other areas, such as engineering mechanics (Sukardi, 2011: 31).

Validity has several characteristics, among others: it refers to the results of using an instrument, not to the instrument itself; it expresses a degree of validity (high, medium, or low) rather than a simple verdict of valid or invalid; and it does not hold universally. A mathematics test may show high validity for measuring computational skill, only moderate validity for measuring the ability to think mathematically, and even lower validity for predicting future success in mathematics (Sukmadinata, 2010: 228-229).

Factors Affecting Validity

There are two important elements of validity. First, validity is a matter of degree: none is perfect, some is medium, and some is low. Second, validity is always associated with a particular decision or a specific purpose, as in the view of R. L. Thorndike and L. P. Hagen that "validity is always in relation to a specific decision or use." Gronlund, meanwhile, suggests three factors that affect the validity of test results: the evaluation instrument itself, the administration and scoring of the test, and the learners' responses (Arifin, 2011: 247-248).

Many factors can render the results of an evaluation test invalid. Broadly, they can be grouped by their source: factors internal to the test, factors external to the test (its administration and scoring), and factors originating from the learners themselves (Sukardi, 2011: 38-39).

Factors originating from the test. Internal sources of invalidity in an evaluation test include the following: test instructions written so unclearly that they reduce the validity of the test; wording in the instrument that is too difficult; poorly constructed items; a level of item difficulty that does not match the material the learners have received; an inappropriate time allocation, whether too short or too generous; too few items to represent a sample of the learning material; and item answers that learners can predict.

Factors originating from administration and scoring. These factors can reduce the validity of interpreting evaluation tests, particularly tests made by teachers. Examples include: insufficient working time, so that learners answer in a hurry; cheating during the test, so that it cannot distinguish learners who studied from those who cheated; instructions from supervisors that not all learners can follow; inconsistent scoring, for example on an essay test; learners who are unable to follow the directions of a standardized test; and stand-ins (people other than the learners) who sit the test and answer the items.

Factors originating from the learners' answers. It often happens that the interpretation of evaluation items becomes invalid because it is influenced by the learners' answers rather than by the items themselves. For example, before the test the learners may become tense because the subject teacher is known to be harsh, so many students who take the test fail. As another example, during a performance test the room may be too crowded or noisy for learners to concentrate. All of this can reduce the validity of the evaluation instrument.

Kinds of Validity

In the view of several experts, validity can be classified into several types: construct validity, content validity, predictive validity, face validity, and concurrent validity.

Content Validity

Content validity testing is carried out on the content of the test to ascertain whether the achievement test items precisely measure the state they are meant to measure (Purwanto, 2011: 120).
Content validity is validity judged from the content of the test itself as a measure of learning outcomes: the extent to which the achievement test, as a measure of students' learning outcomes, can represent the whole of the material or study material that should be tested (Sudijono, 2013: 164). According to Guion (1977), content validity can be determined on the basis of expert judgment. The procedure followed so that the test instrument is valid is: define the blueprint of what is to be measured, determine which part of the blueprint each question measures, and compare each item against the blueprint that has been set. A test is said to have content validity if it measures specific objectives that are parallel to the subject matter or content given (Arikunto, 2010: 67). Because the material taught is listed in the curriculum, content validity is often also called curricular validity. Content validity can be worked toward from the moment the items are prepared, by breaking down the curriculum or the textbook material. To construct a test instrument that has content validity, the instrument should be based on the subject matter the learners have studied (Widiyoko, 2009: 129-130).

Whether a test instrument has content validity can be determined by comparing the test material with a rational analysis of the material that should have been used in preparing the instrument. If the test material matches that rational analysis, the instrument has content validity; conversely, if it does not match, the instrument lacks content validity. The discussion of content validity parallels the discussion of population and sample: if the whole of the subject matter that has been given to the learners, or that they have been instructed to study, is regarded as the population, and the content of the achievement test in that subject is regarded as the sample, then the test can be said to have content validity if its content is representative of all the subject matter that has been taught or assigned (Sudijono, 2013: 164-165). Content validity can be tested using one of three methods: reviewing the instrument itself, asking for expert judgment, or item correlation analysis (Purwanto, 2011: 120).
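As a rough illustration of the procedure Guion describes (define the blueprint to be measured, record which part of it each question measures, and compare each item against the blueprint), the short Python sketch below checks a hypothetical set of items against a hypothetical blueprint. The indicator names and the item-to-indicator assignments are invented for the example and are not taken from the paper.

```python
# A hypothetical sketch of checking content validity against a test blueprint:
# define the indicators to be measured, record which indicator each item claims
# to measure, and compare the two. All names below are invented examples.

# Indicators the test is supposed to cover, e.g. drawn from the syllabus.
blueprint = {
    "explain the pillars of prayer",
    "order the steps of ablution",
    "apply the rules of fasting to a given case",
}

# Which indicator each item (by number) is written to measure.
items = {
    1: "explain the pillars of prayer",
    2: "order the steps of ablution",
    3: "explain the pillars of prayer",
    4: "recall dates of historical events",  # drifted outside the blueprint
}

covered = blueprint & set(items.values())
missing = blueprint - covered                     # indicators no item measures
off_blueprint = sorted(n for n, ind in items.items() if ind not in blueprint)

print("Indicators covered:", sorted(covered))
print("Indicators not represented by any item:", sorted(missing))
print("Items outside the blueprint (revise or replace):", off_blueprint)
```

Indicators left uncovered and items that fall outside the blueprint are exactly what a rational analysis of the test content is meant to expose.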
Content validity can also be examined from the cognitive domain. Here the items are analyzed against Bloom's taxonomy of the cognitive domain, which comprises six levels of ability (Daryanto, 2008: 103-112).

Knowledge. Knowledge is the most basic aspect of Bloom's taxonomy and is often referred to as the memory aspect. At this level a person is required to be able to recognize or identify concepts, facts, or terms without necessarily understanding or using them.

Comprehension (understanding). One level above knowledge is comprehension. This ability is generally emphasized in the learning process: learners comprehend or understand what is taught, know what is being communicated, and can make use of it without having to relate it to other things.

Application. Application is the use of abstractions in concrete or specific situations. The abstractions may be theories, ideas, or technical instructions; applying such abstractions to a new situation is called application (Sudjana, 2010: 25).

Analysis. At this level a person is required to break a situation or a particular state down into its constituent elements or components, so that the situation becomes clearer. Item formats that measure this ability include multiple choice and essay.

Synthesis. At this level a person is required to produce something new by combining the various factors at hand.

Evaluation. At this level a person is required to evaluate a situation, a state, a statement, or a concept against certain criteria. What matters in evaluation is creating conditions in which students can develop the criteria, standards, or measures used to evaluate something.

The six levels of the cognitive domain can be depicted as a pyramid (Figure 2.1: Bloom's system of classification). The operational verbs that indicate each of the six cognitive levels are as follows.

Table 2.1 Operational verbs of Bloom's taxonomy
Knowledge: mention again, memorize, show, underline, sort, state
Comprehension: explain, describe, restate, modify, give an example, adapt
Application: operate, demonstrate, calculate, connect, prove, produce, show
Analysis: compare, contrast, separate, connect, make diagrams or schemes, show relationships, question
Synthesis: categorize, combine, invent or create, design, rearrange, weave together, conclude, make patterns
Evaluation: defend, categorize, combine, create, design, arrange, rearrange, weave together, connect, conclude, make patterns, give arguments
(Munthe, 2010: 40-42)

Construct Validity

Etymologically, the word construction connotes arrangement, framework, or conception. Construct validity can therefore be interpreted as validity judged from the point of view of arrangement, framework, or conception (Sudijono, 2013: 166). Borg and Gall (in Reksoatmdjo, 2009: 194) define it thus: "Construct validity is the extent to which a particular test can be shown to measure a hypothetical construct." In terminological terms, an achievement test can be said to have construct validity when, in its composition, framework, or conception, its results accurately reflect a construct in psychological theory (Sudijono, 2013: 166). The term construct in psychological theory refers to the way experts in psychology propose that a learner's mind can be broken down into several aspects or specific domains; Benjamin S. Bloom, for example, elaborates the mind into three aspects, namely the cognitive, affective, and psychomotor domains. Construct validity does not mean that a test is regarded as good because of its sentence structure or because its items are coherently ordered; rather, the results of a test can be said to have construct validity only if the items that make up the test actually and precisely measure the aspects of thinking stated in the specific instructional objectives.
Construct validity thus refers to the extent to which an instrument measures the concept of a theory, that is, the concept that forms the basis of the instrument. The definition or concept being measured comes from the theory used; therefore there must be an adequate theoretical account underlying the construction of the instrument.

Concurrent (Comparative) Validity

Comparative validity means that the fidelity of a test can be judged from its correlation with skills the test takers already actually possess at the present time. The means used to assess comparative validity is to correlate the results achieved on the test in question with the results achieved on a similar test that is already known to have high validity (for example, a standardized test). The size of the correlation coefficient obtained indicates how high or low the validity of the test being appraised is (Sudijono, 2013: 177).

Predictive Validity

Predictive validity is the accuracy (fidelity) of a measuring instrument in terms of the ability of a test to predict later achievement. The level of predictive validity is assessed by looking for the correlation between the scores students achieve on the test and the scores they achieve later (Nurkancana, 2000: 128).

How to Determine the Validity of a Test Instrument

According to Widoyoko (2014: 176), an item is said to be valid if it makes a large contribution to the total score. In other words, an item has high validity if its scores are aligned with the total score. This alignment can be expressed as a correlation, so the product moment correlation formula is used to determine item validity:

r_xy = (N ΣXY - ΣX ΣY) / √{[N ΣX² - (ΣX)²][N ΣY² - (ΣY)²]}

where:
r_xy = the correlation coefficient between variable X and variable Y (the validity coefficient)
X = the item score
Y = the total score
N = the number of respondents

The correlation coefficient can be interpreted in two ways. The first is to look at the computed r and interpret it with the following criteria:

Computed r: Criterion
0.800 to 1.00: very high
0.600 to 0.79: high
0.400 to 0.59: sufficient
0.200 to 0.39: low
0.000 to 0.19: very low

The second is to consult the r table. If the computed r is smaller than the table r, the item is declared invalid; conversely, if the computed r is greater than or equal to the table r, the item is declared valid (Arikunto, 2012: 89). The way of calculating the validity of an item can be seen in the following example.

Example of Calculating Item Validity

The following data are the results of a PAI (Islamic Religious Education) test given to 10 students, consisting of 10 items.
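To make the calculation concrete, here is a minimal Python sketch of the item-validity procedure just described: it computes the product moment correlation between each item's scores and the students' total scores, interprets the result against the criteria above, and compares it with the table r. Because the original example's table of PAI scores is not reproduced here, the scores below, the use of five items, and the critical value r table = 0.632 (commonly cited for N = 10 at the 5% level) are assumptions made purely for illustration.

```python
# A minimal sketch (not the paper's own worked example) of item validity via
# the product moment correlation of item scores (X) with total scores (Y).
from math import sqrt

def product_moment_r(x, y):
    """Pearson product moment correlation, raw-score formula."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi ** 2 for xi in x)
    sum_y2 = sum(yi ** 2 for yi in y)
    return (n * sum_xy - sum_x * sum_y) / sqrt(
        (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)
    )

def interpret(r):
    """Qualitative criteria for the computed r, as listed above."""
    if r >= 0.800: return "very high"
    if r >= 0.600: return "high"
    if r >= 0.400: return "sufficient"
    if r >= 0.200: return "low"
    return "very low"

# Hypothetical scores: one row per student, one column per item (1 = correct).
scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
]
totals = [sum(row) for row in scores]   # Y: each student's total score
R_TABLE = 0.632                         # critical r for N = 10 at the 5% level

for i in range(len(scores[0])):
    item_scores = [row[i] for row in scores]   # X: scores on item i
    r = product_moment_r(item_scores, totals)
    verdict = "valid" if r >= R_TABLE else "not valid"
    print(f"Item {i + 1}: r = {r:.3f} ({interpret(r)}), {verdict} vs r table = {R_TABLE}")
```

The same correlation function can also serve for concurrent or predictive validity as described above, by replacing the total score with scores on a criterion measure such as a standardized test or later achievement.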

