Assessment Design and Implementation
The grading rubric must be followed to the letter.
Instructions
Part 1: Fieldwork
Develop assessments aligned to the objectives of your educational setting by designing two exemplar assessments. These may resemble the assessments you already use in your own teaching practice or may differ from them, as long as each one explicitly aligns with a research-based approach. Decide which research-based approach to assessment you will use in each design; note that you may not use the same approach for both assessments.
Part 1: Deliverable
For each of your exemplar assessments, provide:
- A rationale for how the assessment will be evaluated (rubric, percentage, et cetera).
- An explanation of research best practices behind the assessment approach.
- A description of the learning objectives to which the assessment is aligned.
Also do the following:
- Describe the students who took or will take each assessment.
- Identify each assessment as formative or summative.
- Identify the research-supported approach on which each assessment is based.
- Include a copy of each assessment and, if applicable, the rubric or scoring guide.
Part 2: Fieldwork
Implement the assessments you designed in your educational setting and collect the data they generate.
Part 2: Deliverable
For each implementation, evaluate your assessment against research-based practices:
- Analyze the implementation experience:
  - What went as expected?
  - What did not go as expected?
  - What would you do differently next time?
- Reflect on any student feedback.
- Analyze the extent to which the assessment experience aligned with the expectations you formed from your reading of the professional literature.
Then, compare the two implementation experiences in terms of overall effectiveness, as measured by:
- The validity and reliability of the data collected.
- The “fit” with your educational setting. (Consider the culture of your educational environment, receptivity of colleagues and other stakeholders, administrative support, et cetera.)
Resources: Authentic Assessment
- The following articles about authentic assessment are available in the Capella library:
- Ado, K. (2013). Designing their own: Increasing urban high school teacher capacity for creating interim assessments. The High School Journal, 97(1), 41–55.
- Litchfield, B. C., & Dempsey, J. V. (2015). Authentic assessment of knowledge, skills, and attitudes. New Directions for Teaching and Learning, 2015(142), 65–80.
- Neo, M., Neo, K. T., & Tan, H. Y. (2012). Applying authentic learning strategies in a multimedia and Web learning environment (MWLE): Malaysian students’ perspective. TOJET: The Turkish Online Journal of Educational Technology, 11(3), 50–62.
- Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840–852.
Resources: Cultural Aspects of Assessment
- The following articles about cultural aspects of assessment are available in the Capella library:
- Fensham, P. J., & Cumming, J. J. (2013). “Which child left behind”: Historical issues regarding equity in science assessment. Education Sciences, 3(3), 326–343.
- Lew, M. M., & Nelson, R. F. (2016). New teachers’ challenges: How culturally responsive teaching, classroom management, and assessment literacy are intertwined. Multicultural Education, 23(3/4), 7–13.
Resources: Data in Assessment
- The following articles about data in assessment are available in the Capella library:
- Martineau, J. A. (2006). Distorting value added: The use of longitudinal, vertically scaled student achievement data for growth-based value-added accountability. Journal of Educational and Behavioral Statistics, 31(1), 35–62.
  - Begin reading the discussion on page 55 and then refer back to the data tables at the beginning of the article.
- Betebenner, D. (2009). Norm- and criterion-referenced student growth. Educational Measurement: Issues and Practice, 28(4), 42–51.