
ANSWER THE FOLLOWING QUESTIONS. FOR MULTIPLE-CHOICE QUESTIONS, SELECT AN ANSWER AND EXPLAIN IN 2-3 SENTENCES WHY YOU CHOSE IT. THIS ISN'T AN EXAM, BUT BE SURE TO EXPLAIN EVERY ANSWER. USE THE SOURCES BELOW TO HELP.

1. The degree to which test instruments measure what they are intended to measure is known as _______.
a.    reliability
b.    stability
c.    rigor
d.    validity

2. In a scatter diagram, if the cluster of data points slopes from upper left to lower right, this indicates
A) a positive correlation 
B) a negative correlation
C) zero correlation
D) a curvilinear relationship
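
A minimal Python check for question 2, using made-up points whose cluster slopes from upper left to lower right (illustrative numbers only, not from any assigned source):

```python
import numpy as np

# Illustrative data only: y falls as x rises (upper-left to lower-right cluster).
x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([9.8, 8.1, 7.2, 5.5, 4.0, 2.9])

# Pearson r for a downward-sloping cloud is negative, here close to -1.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))
```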

3. T or F Content validity refers to the idea that enough questions were chosen from all possible questions, without regard to the subsets of questions in the pool.

4. Concurrent/convergent validity refers to:
a. expert opinion that your test is measuring what it is supposed to
b. how well your test predicts a criterion
c. how well your test correlates with an already-established test

5. What type of reliability is used when you are comparing people’s observations of a behavior?

6. Internal consistency is a measurement of:
a. reliability over time
b. validity
c. how each question correlates with every other question
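
A minimal Python sketch for question 6: Cronbach's alpha, a common internal-consistency coefficient, computed on hypothetical item responses (the 5x4 score matrix below is made up for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = test items (scored numerically)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 5 people to 4 items (e.g., 1-5 Likert ratings).
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))  # high (about 0.94) because the made-up items agree
```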

7. Split half reliability is similar to:
a. validity
b. test-retest reliability
c. internal consistency
d. inter-rater reliability
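
A minimal sketch for question 7, reusing the same hypothetical score matrix: split the items into two halves, correlate the half-test totals, then apply the Spearman-Brown correction to estimate full-length reliability:

```python
import numpy as np

# Hypothetical 5x4 score matrix; odd-numbered items vs even-numbered items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
half_a = scores[:, ::2].sum(axis=1)   # totals on items 1 and 3
half_b = scores[:, 1::2].sum(axis=1)  # totals on items 2 and 4

r_half = np.corrcoef(half_a, half_b)[0, 1]
# Spearman-Brown correction estimates reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```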

8. T or F Face validity is whether the questions look like they refer to the construct in question.

9. Practice effects must be taken into consideration when measuring:
a. internal consistency
b. test-retest reliability
c. split-half reliability

10. T or F A test can be accurate without being reliable.

11. T or F Long questions are better on tests because you can cover multiple ideas with them.

12. If an item discriminates between people well:
a. all people should get it right
b. nobody should get it right
c. the number of people who get it right will fall as the grades go up
d. the number of people who get it right will rise as the grades go up
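
A minimal sketch for question 12, computing a simple upper-lower discrimination index on made-up data: if an item discriminates well, high scorers pass it more often than low scorers.

```python
import numpy as np

# Hypothetical data: 1 = answered the item correctly, 0 = incorrect,
# paired with each person's total test score.
item_correct = np.array([1, 1, 1, 0, 1, 0, 0, 0])
total_score  = np.array([95, 88, 82, 75, 70, 60, 55, 40])

# Discrimination index: proportion correct among high scorers
# minus proportion correct among low scorers (top/bottom halves here).
order = np.argsort(total_score)[::-1]   # people sorted from high to low total score
high, low = order[:4], order[4:]
d = item_correct[high].mean() - item_correct[low].mean()
print(d)  # positive: more high scorers than low scorers got the item right
```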

13. T or F The difficulty level of a question is the proportion of people who correctly answered the question.
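
Worked example for question 13 (hypothetical numbers): if 30 of 40 test-takers answer an item correctly, its difficulty index is p = 30/40 = 0.75.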

14. An ICC is:
a. a validity measure
b. used in item analysis in IRT
c. discriminates between groups
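
A minimal sketch for question 14, assuming ICC here means the item characteristic curve from item response theory (IRT). The two-parameter logistic (2PL) parameters below are purely illustrative:

```python
import numpy as np

def icc_2pl(theta, a=1.2, b=0.0):
    """Item characteristic curve: probability of a correct answer at ability theta.
    a = discrimination (slope), b = difficulty (location)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Probability of success rises with ability; a larger a means the item
# discriminates more sharply near its difficulty b.
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(icc_2pl(theta), 2))
```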

15. Name one reason to make a test.

16. An essay question is a type of ____________________________ question.

17. T or F Differential item functioning analysis is used to detect bias in questions.

18. Which type of validity is considered the overall validity?

19. If I wanted to find only the high achievers in a group, I would use:
a. an easy test
b. a hard test
c. a test that isn’t easy or hard

20. T or F The validity of the criterion in no way affects the validity of the test meant to predict it.
