Mapping Student Learning:
From at least the 1980s, and especially since the passage of the No Child Left Behind Act of 2001, assessment has become a watchword in higher education. Many academics have been involved in assessment on the macro level: writing departmental learning objectives, aligning these objectives to college or university objectives, and exploring ways to assess whether these goals have been met. I dare say, however, that for most educators the most meaningful form of assessment occurs at the level of classroom instruction. When we enter our classrooms, we carry with us assumptions about what students will take away from our courses. In my case, for instance, I believe that students passing my World History course will walk away with a broad understanding of the major technological milestones of the last 10,000 years of human history and a greater appreciation of the impact of the environment on human affairs. But are these beliefs about our students valid? Are our courses really making an impact on students? And if so, is it the impact that we intended?
To answer these nagging questions, we academics employ a number of different techniques, though no single technique provides completely satisfactory results. Quizzes and tests administered over the course of the class are the most obvious means of assessing student learning. However, since quizzes and tests are generally factored into the student's grade, their results are colored by the fact that students may say what they believe the instructor wants to hear. In addition, essay and quiz questions are often leading questions demanding specific responses, and thus may tell us more about the instructor's beliefs than about those of the student. The same observations apply to course homework such as essays or term papers, where they are further compounded by the problem of plagiarism, which renders such assignments highly dubious as a form of course assessment.
A different form of course assessment is provided by anonymous end-of-the-semester course evaluations, in which students are encouraged to speak candidly about their classroom experience. However, unless they are carefully designed, these generic course evaluations tell us more about the popularity of the instructor than about how the course affected the students. In addition, unless accompanied by a similar assessment conducted at the start of class, which is not commonly practiced, such assessments can tell us little about how the class changed student knowledge and aptitudes over the course of the semester.
Since I have never found these techniques, even in aggregate, completely satisfying in assessing the impact of my World History course on students, I resolved to try something new: to assess the impact of my class by means of concept maps the students had completed in my 2009 and 2010 World History classes. Concept mapping, which was first developed as an analytical tool in the 1970s to represent student learning in the sciences, allows students to brainstorm the connections between ideas, institutions, and other entities.1 In a typical concept map, a series of concepts are arranged visually for the viewer, with the relationships between them marked by positions in a hierarchical arrangement, arrows in a sequence, overlaps showing areas of interaction, and other linkages. Since each concept map is freeform and unique, I hoped that these concept maps would give me a more honest accounting of student views of history at specific stages of a student's intellectual development. I also believed that concept mapping offered a special advantage in Qatar, where I teach. Given the predominantly ESL nature of our student body, visual-based assessments might give us more insight into student learning than text-based assessments, especially in the freshman year, when student writing skills are particularly underdeveloped.
I should note that using concept maps in a history class is by no means standard practice. Although concept mapping is sometimes used as an educational tool in fields such as biology, meteorology, engineering, decision studies, and even nursing, historians generally prefer to assess students through the written word, which is of course a historian's stock in trade. Indeed, I was introduced to the idea of concept mapping as a classroom tool, not by a history colleague, but in a workshop of the CMU Eberly Center for Teaching Excellence in Qatar. Subsequent research into the topic further convinced me of the educational benefits of concept mapping. According to scholarly literature on the topic, concept mapping allows instructors to "elicit the student's prior knowledge" and then "identify how students' knowledge has been constructed after the completion of instruction." Concept mapping also encourages students to "organize knowledge into meaningful related chunks," and, by allowing "free response" by the student, provides authentic insights into "the students' knowledge structure."2
With these advantages in mind, I added two concept map assignments to my syllabus for the Fall '09 and Fall '10 World History classes, one at the very start of the semester, the other right after the final exam. Both assignments charged students to complete a concept map on the question "how does world history work?" Students were provided a Microsoft Publisher document "template" with instructions [Figure 1], a list of suggested elements for their mind map (society, economy, culture, politics, religion, ideology, environment, and technology), as well as a number of sample mind maps reflecting different points of view (Marxist, Huntington's neo-conservatism, environmentalist, etc.).3 However, students were encouraged to be imaginative, and quite a few students took up this challenge. The result was a fairly diverse pool of concept maps which reflected not only different beliefs about history but also different schemes of organization, ranging from flow charts to Venn diagrams to diorama-style pictorial metaphors and everything in between. [Figure 2]
To ensure that these assignments were completed, but also to avoid contaminating the results, both concept map assignments were graded pass/fail. Any concept map submitted on time that showed a reasonable amount of effort and self-reflection on the part of the student was given full marks. In theory, students who submitted a rudimentary or inadequate concept map that did not clearly demonstrate their beliefs about world history would have been told to re-do the assignment. In practice, however, students embraced these assignments with gusto, and without a single exception submitted articulate, thoughtful, and highly personalized concept maps on the question of how world history works.
The resulting database of 328 concept maps collected over two years, I believe, provided me with a course assessment instrument that avoided the limitations inherent in more conventional classroom assessment activities. Unlike quizzes and tests, these concept maps were not graded on content and thus could be expected to give a more honest accounting of student beliefs. What is more, the open-ended nature of the assignment avoided the problem of leading questions. In addition, the non-graded format reduced the threat of plagiarism, since a student had little to gain from plagiarizing the work of others. Furthermore, unlike end-of-the-semester evaluations, these concept maps could demonstrate change over time. Finally, unlike standard end-of-the-semester evaluations, the concept maps prompted students to comment on the content of the course rather than on how much they enjoyed the course or liked the instructor.
But how can quantitative data about student learning be teased out of a mass of documents containing qualitative, somewhat subjective, visual-based data? On this question, the existing literature was of little help: I could not find any guidance in the scholarly literature on how concept maps could be quantitatively assessed. Academic resource web sites were of little help either, since they provided rubrics for grading student work rather than for evaluating course outcomes in general.4 Even the Eberly Center, which had inspired the use of concept maps in the first place, provided only very general guidance on this issue.
As a result, I was obliged to design a survey instrument from scratch. On the advice of Andreas Karatsolis, Carnegie Mellon University in Qatar's Eberly Center representative and resident expert on assessing visual material, I began with a random sample of 12 of the 328 concept maps I had collected and tried to tease out specific features that could be measured objectively and were common to many or most of the maps. With Andreas' help, I used that initial study to compose a rough survey instrument. We then selected another 10 concept maps at random, individually classified them according to our instrument, and compared notes. This led to further revisions to the instrument, after which the same process was repeated.
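For readers who wish to adapt this iterative coding process, the "classify independently and compare notes" step can be made more formal with an agreement statistic such as Cohen's kappa, which corrects raw percent agreement for chance. The article does not report a formal reliability statistic, so the sketch below is purely illustrative; the category labels and coder judgments are invented.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Chance agreement: probability both coders pick the same category at random.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders classify the central element of the
# same 10 concept maps (these labels are invented, not the study's data).
a = ["society", "society", "religion", "environment", "society",
     "religion", "technology", "society", "environment", "religion"]
b = ["society", "society", "religion", "environment", "religion",
     "religion", "technology", "society", "environment", "society"]
print(round(cohens_kappa(a, b), 3))  # prints 0.714
```

A kappa near 1.0 would signal that the instrument could be applied consistently; successive revisions of the instrument should push the statistic upward.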
The end result of this process was a highly refined survey instrument that, with only minimal interpretation on the part of the coder, could collect 9 different data points for each image:
All of the above data points were intended to quantify the complexity of the concept map. The last three data points, however, had a different function: to classify the substance of each concept map–in other words, the message the student intended to convey. This meant translating subjective images into objective, quantifiable data, which was no easy task. Andreas and I solved this problem, after much discussion, by identifying 3 different types of elements—central elements, base or source elements, and capstone elements—all of which occupied positions of importance and special meaning within the diagrams. [Figure 3]
These fairly strict definitions, we believe, gave us a set of rules that could be applied objectively to the diagrams in order to generate numerical data concerning student intentions when constructing their concept maps.
Now that we had our research instrument, we applied for and received the necessary IRB approval to conduct research on human subjects (protocol # HS11–319). Once that step was completed, we anonymized all student data and placed the student concept maps into an Excel worksheet via hyperlinks, noting with a code on each map whether it was produced at the start or the end of the semester. We then randomized the hyperlinks, ran them all through our 9-point survey instrument, and crunched the numbers. The results were interesting, and in some cases quite surprising.
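The anonymize-and-randomize step described above can be sketched in a few lines for anyone replicating the method. This is not the study's actual workflow (which used an Excel worksheet of hyperlinks); it is a minimal stand-in, with invented file names, showing the essential moves: replace identities with opaque IDs, keep only the pre/post stage code, and shuffle the order so the coder cannot infer which stage a map came from.

```python
import random

# Invented records standing in for the collected concept map files.
maps = [
    {"student": "A. Student", "file": "map_001.pub", "stage": "pre"},
    {"student": "B. Student", "file": "map_002.pub", "stage": "post"},
    {"student": "C. Student", "file": "map_003.pub", "stage": "pre"},
]

# Replace identities with opaque IDs, retaining only the stage code.
anonymized = [
    {"id": f"CM{i:03d}", "file": m["file"], "stage": m["stage"]}
    for i, m in enumerate(maps)
]

# Shuffle so pre- and post-course maps are coded in random order.
rng = random.Random(42)  # fixed seed so the ordering is reproducible
rng.shuffle(anonymized)
for m in anonymized:
    print(m["id"], m["stage"])
```

Fixing the random seed lets a second coder reproduce the exact presentation order, which matters if agreement between coders is later checked item by item.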
Looking first at all elements in aggregate (central, base/source, and capstone combined), it was clear that students had certain preconceptions about history that were reflected in the selective use of elements from the template. [Figure 4] Overall, for example, students showed a clear preference for "society", which was used as an element in nearly half of all concept maps (159 uses out of 328 maps). This brought to mind the often-repeated truism that students in Qatar are drawn predominantly from "collectivist" societies, which privilege societal over individual needs.6 It would be very interesting to compare the results of the Qatar campus survey with a similar survey conducted with Pittsburgh campus World History students. In any case, no other element stands out to the same extent, and the difference between the second-most used element (environment) and the least used element (ideology) was only 41 responses.
The data gets more interesting when we break these generic results down into the specific types of elements, as defined by the role the element played in the diagram. As is clear from figure 5, student decision-making when placing elements into diagrams was anything but random; rather, it followed clear patterns. Society, for example, was overwhelmingly seen as a central element, interconnected to all other elements, and not particularly as a base/source or capstone element. Religion, on the other hand, was seen predominantly as a base/source element–in other words, as a "root cause" of historical processes. Interestingly, both environment and technology were used frequently in both the "base/source" and the "capstone" position. Students were clearly split on the historical role played by these two concepts: some saw the environment and/or technology as determinative, others as outcomes of other processes.
So much for the aggregate data. What about change over time? What does the data drawn from the concept maps tell us about how taking my course changed student knowledge or perceptions? To determine that, we re-sorted the documents into "before" and "after" sets and once again crunched the numbers. I must say from the outset that I did not expect dramatic changes between the two data sets. The second diagram, after all, was composed by students during their busy finals week, so the temptation to copy or recapitulate the original concept map must have been great. And indeed, by my count 21 students out of 164 re-submitted the first concept map without any alterations at the end of the semester. As a result, I did not expect the changes between the "before" and "after" concept maps to rise to the level of statistical significance. However, this is beside the point: I was not looking for statistical significance, but rather for change that is significant to me for self-assessment and as a guide to the future development of my world history course.
Looking first at the complexity of the mind maps, it is clear that the changes over the course of the class were fairly modest. [Figure 6] Students used slightly more text elements, picture elements, and interconnections in their end-of-semester assignment than in the concept map they completed at the start of the semester. More significant, to my mind, was the jump in "original elements"—in other words, elements not on the assignment template. The number of such elements rose from about 1.49 per diagram to 1.91, a 28% increase. I interpret this to mean that students had picked up some additional content from my class and were better able to move beyond the generic concepts listed on the template, though admittedly not to the degree that I would have liked to see.
While the documents showed only a moderate increase in the complexity of student thinking about history, the changes in content were more striking. Looking first at all elements in aggregate, it is clear that student perceptions changed over the course of the semester. As mentioned above, I teach World History from a strong environmentalist perspective, consistently stressing human/environmental interactions and feedback mechanisms throughout the course. Clearly, as seen in figure 7, this focus had an impact on students. While society was the most commonly used element in the start-of-class concept maps, with religion a strong second, environment had risen above both in student self-reflections by the end of the course. However, the most significant change—and a completely unexpected one—was the marked decline of "ideology" as an element of importance in the final concept map. Since I talk very little about ideology in my World History class, I'm not entirely sure what this means. I have two guesses as to why ideology declined so drastically in importance: either students used ideology in their start-of-the-semester diagrams without really knowing what it meant and later discounted it once they understood it better, or else students were convinced of the importance of other types of elements over the course of the class and privileged them over ideology in their final reflections.
Breaking the data down into specific types of elements–central, base/source, and capstone–revealed even more about changes to student perceptions over the course of the class. As mentioned earlier, society was by far the most commonly used central element in the concept maps, and this was true in both the pre-class and post-class reflections. In contrast, almost no students assigned the environment to a central position at the start of the class. By the end of the class, however, the number of students who assigned environment a central role in their diagrams had risen from 5 to 12, a 140% increase. [Figure 8] This was the single biggest change between the pre- and post-class diagrams, and it is precisely the change that I had hoped would occur, since my World History course emphasizes the interplay between environment and economic, cultural, and technological factors. The spike in the use of the environment as a central element was not only significant to me, but is statistically significant to a high degree (χ² = 7.119). Overall this finding was the most gratifying result of the study, tempered only by the wish that the jump had been even more substantial.
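For readers unfamiliar with the statistic, a chi-square test of this kind asks whether the pre/post difference in counts could plausibly be chance. The sketch below shows the standard Pearson formula for a 2×2 table in plain Python. The cell counts are inferred from the narrative (5 of 164 pre-course maps and 12 of 164 post-course maps using environment centrally) and are illustrative only; the article's reported χ² = 7.119 presumably reflects the study's exact contingency table and test variant, which are not reproduced here.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]],
    without continuity correction: n(ad - bc)^2 / (row and column totals)."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Counts inferred from the narrative (illustrative, not the study's table):
# rows = pre-course vs. post-course maps; columns = environment central? yes/no.
pre_yes, pre_no = 5, 164 - 5
post_yes, post_no = 12, 164 - 12
chi2 = chi_square_2x2(pre_yes, pre_no, post_yes, post_no)
print(round(chi2, 2))  # prints 3.04
```

For a 2×2 table there is one degree of freedom, so a χ² above about 3.84 corresponds to significance at the conventional 0.05 level.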
A similar story played out in the base/source elements. As mentioned earlier, religion was the students' first choice as a base or source element, and this preference changed only slightly between the pre- and post-class concept maps. What did change markedly, once again, was student perceptions about the environment, which rose from third place to first by the end of the semester, even overtaking religion as the most important driving force in human history. [Figure 9] Once again, this was a highly gratifying result which aligned closely with my hopes about how my course influenced student perceptions of history.
Finally, on the subject of capstone elements, the image that emerges is less clear. The most notable change was the sharp drop-off of ideology as a factor in human history, as discussed above. But ideology is not alone: of the 8 elements most frequently used by students, all but 2 declined, sometimes dramatically, between the pre- and post-class reflections. [Figure 10] The only element to rise substantially in frequency was culture, which I suspect is a reflection of my consistent discussion of cultural and technological "memes" throughout the semester and my repeated insistence that memes wax and wane in relation to the deeper economic, social, and environmental forces that shape human society. Even so, the number of capstone elements used by students dropped from 230 to 165, a decline of nearly one-third. Once again, I'm not entirely sure what this means. To some degree, it suggests that students were more willing to go beyond the concepts on the template in their post-course reflections, placing low-frequency elements that I did not count in this study, such as "agriculture," "prosperity," and "globalization," in a capstone position. Or perhaps it indicates that student perceptions of history were more focused by the end of the semester, and that students were more likely to see historical processes as tending toward a specific direction rather than having divergent outcomes.
Despite such uncertainties, which might be resolved in later iterations of the experiment, overall our study of world history concept maps was a success. Admittedly, it was quite time consuming, requiring about 10 hours to create the survey instrument and another 15 or so to process the data. Nonetheless it was time well spent, since I believe this exercise gave me invaluable insights into student outcomes of my world history course. I now have a much better idea of Qatar campus students' assumptions about human history when entering the course, in particular their tendency to prioritize society and to consider religion a driving force in history. I can now demonstrate that my course does in fact convince many students–though perhaps not as many as I would like–that the environment plays a central or driving role in human affairs. I am also confident that my students emerge from World History with a greater command of historical facts and concepts, and are thus able to go beyond the generics of the template and embrace a more focused, personalized vision of the historical process.
Benjamin Reilly teaches at Carnegie Mellon University in Qatar. He can be reached at firstname.lastname@example.org.
1 Joseph D. Novak and Alberto J. Cañas, "The Theory Underlying Concept Maps and How To Construct and Use Them," Institute for Human and Machine Cognition (2006). Accessed 7 December 2011.
2 Evangelia Gouli et al., "A Coherent and Integrated Framework Using Concept Maps for Various Educational Assessment Functions," Journal of Information Technology Education, Vol. 2 (2003), pp. 216–219. See also John R. McClure et al., "Concept Map Assessment of Classroom Learning: Reliability, Validity, and Logistical Practicality," Journal of Research in Science Teaching, Vol. 36, No. 4 (1999), pp. 475–492; Diana C. Rice et al., "Using Concept Maps to Assess Student Learning in the Science Classroom: Must Different Methods Compete?" Journal of Research in Science Teaching, Vol. 35, No. 10 (1998), pp. 1103–1127; Joseph S. Francisco et al., "Assessing Student Understanding of General Chemistry with Concept Mapping," Journal of Chemical Education, Vol. 78, No. 2 (2002), pp. 247–257; Ian M. Kinchin and David B. Hay, "How a Qualitative Approach to Concept Map Analysis can be Used to Aid Learning by Illustrating Patterns of Conceptual Development," Educational Research, Vol. 42, No. 1 (2000), pp. 43–57.
3 The text of the directions reads as follows: Create your own mental model of how you think world history works! Which of the elements in the left column are most important in explaining the course of world history, in your opinion? Are any missing that you want to add? How do these elements interact, overlap, etc.? You could arrange these in a hierarchy, in a Venn diagram (of overlapping bubbles), whatever you want! Change the sizes, delete, add, cut-and-paste, whatever. There are no right or wrong answers–just illustrate your own opinion as clearly as you can.
4 See for example the section on mind maps/concept maps at the University of Minnesota Digital Media Center web site at <http://dmc.umn.edu/activities/mindmap/>, accessed May 27, 2011; Michael Zeilik, FLAG: Classroom Assessment Techniques: Concept Mapping, online at <http://www.wcer.wisc.edu/archive/cl1/oldflag/cat/conmap/conmap7.htm>, accessed May 29, 2011.
5 See <http://www.visual-literacy.org/periodic_table/periodic_table.html>.
6 As with much of the Middle East, Qatar's index of individualism (IDV) has not been studied. However, many of our students are Indian (IDV 48), Egyptian (IDV 38), Malaysian (IDV 26), or Pakistani (IDV 16). These scores are typical of collectivist societies. In comparison, France, Britain, and the US have IDV ratings of 71, 89, and 91 respectively, all strongly individualist scores. See Geert Hofstede et al., Cultures and Organizations: Software of the Mind (New York: McGraw Hill, 2010). Individualism vs. collectivism data was drawn from <http://www.clearlycultural.com/geert-hofstede-cultural-dimensions/individualism/>, accessed May 23, 2011.
|© 2013 by the Board of Trustees of the University of Illinois|