Learning Program Architect - Ep. 1
In this article, the term "program" is used synonymously with "curriculum."
Related experience(s)
If you were ever in an afterschool program (e.g. a test-prep or subject-matter course), you were probably placed in a level system that was carefully built and orchestrated by talented individuals.
Test-prep courses, like SAT prep or TOEFL prep, are usually divided into score ranges. Logically, this makes sense. The assumption is that learners within a similar score range lack competencies in similar areas of the test. For example, most learners in the lower range lack basic test-taking skills such as note-taking or critical reading. However, the higher the score range, the more the weaknesses diverge: one learner might lack knowledge of question types; another might have low test-day stamina. So, how do Learning Program Architects accommodate this?
Scaffolding
Learning Program Architects define things at a higher level; the details are trusted to be filled in at the learning centers. This requires effective communication and collaboration with personnel in operations and training to make sure program, level, and lesson objectives are met. We are charged with relaying changes to program objectives and their rationale, which are modified and validated through several approaches.
First, architects refer to scores. There are datasets that ETS or the College Board publish (e.g. the 2017 Test and Score Data Summary for TOEFL iBT® Tests). We also utilize our learning centers to collect score reports from our past and current students. We then analyze our data to make sure the current level structure is consistent with the trends, hoping the underlying assumptions about the test and its competencies have not changed. However, it becomes tricky if the test developers overhaul a test. That's when all hell breaks loose. We frantically search for, or contact the developers about, publications on test validation and section/component changes. We need to understand WHY they're making the changes. This, in turn, helps us realign our objectives and rationale, which then helps us advise the learning centers on new approaches. That advice can take many forms: training, modified manuals, company-wide announcements, textbook changes, overhauls of our own student assessments (diagnostics and review tests), and so on.
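To make that score check a little more concrete, here is a minimal sketch of the kind of sanity check involved. The scores, cut-offs, and level names below are all invented for illustration; none of this reflects any firm's actual data or tooling.

```python
# A minimal sketch of checking level cut-offs against collected score reports.
# All numbers and level names are hypothetical.
import statistics

# Hypothetical TOEFL iBT total scores pulled from student score reports
scores = [42, 55, 61, 63, 68, 72, 74, 79, 81, 85, 88, 90, 94, 97, 103, 110]

# Current level boundaries (lower bound inclusive)
level_cutoffs = {"Level 1": 0, "Level 2": 60, "Level 3": 80, "Level 4": 100}

def assign_level(score, cutoffs):
    """Return the highest level whose lower bound the score meets."""
    eligible = [name for name, low in sorted(cutoffs.items(), key=lambda kv: kv[1])
                if score >= low]
    return eligible[-1]

counts = {name: 0 for name in level_cutoffs}
for s in scores:
    counts[assign_level(s, level_cutoffs)] += 1

print("Median score:", statistics.median(scores))
print("Quartiles:", statistics.quantiles(scores, n=4))
print("Students per level:", counts)
# If one level swells or empties out over a few intakes, that's a cue to
# revisit the boundaries, or the assumptions behind them.
```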
Second, we define and revise objectives and rationale from empirical data. Here, we're more interested in qualitative analyses of learner comprehension, current approaches in content delivery, issues in center operations, etc. These are collected directly from learners or facilitators. As you might have realized by now, standardized tests are relatively easy to work with. Most of the grunt work (i.e. statistical analysis) is done by the actual test developers. However, they don't know our audience; we do. Theoretically, one of your facilitator's duties is to customize the lessons for his/her students. The modifications in lessons and noticeable trends in learning habits are then reported to the learning center manager or faculty manager. This is all done via system input (on an LMS or a CRM) or verbal feedback, which is then accessed by the curriculum team at head office. Head office then analyzes the comments and student grades as well as student, teacher, and curriculum surveys. We even do class observations and interviews. Most of the time, this involves careful investigation. A lot of the feedback or complaints stem from sources that are not directly observable. Maybe the facilitator is not doing a good job; maybe the student has personal issues that hinder her/him from learning; or maybe our content just sucks. If or when we realize that the actual content is the problem, we look at the various "learning input" methods of our program, one of which is the passage.
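As a toy illustration of that triage, here is a sketch of one simple heuristic: if the same lesson draws complaints across several centers, the content itself becomes the prime suspect before any individual facilitator or student does. The records, field names, and threshold are made up; real LMS/CRM exports look nothing this tidy.

```python
# A purely illustrative sketch of triaging facilitator feedback by lesson.
# Records, field names, and the 3-center threshold are all hypothetical.
from collections import defaultdict

feedback = [
    {"center": "Seoul-A", "lesson": "R-07", "issue": "passage too dense"},
    {"center": "Busan-B", "lesson": "R-07", "issue": "students lost by paragraph 3"},
    {"center": "Seoul-C", "lesson": "R-07", "issue": "ran out of time"},
    {"center": "Seoul-A", "lesson": "L-03", "issue": "audio quality"},
]

# Group the centers that reported an issue for each lesson
by_lesson = defaultdict(set)
for entry in feedback:
    by_lesson[entry["lesson"]].add(entry["center"])

# Lessons flagged at 3+ centers get a content review first
flagged = [lesson for lesson, centers in by_lesson.items() if len(centers) >= 3]
print("Lessons needing a content review:", flagged)
```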
Third, we are constantly analyzing texts from the test or our materials. Have you heard of Lexile or the Flesch–Kincaid readability tests? These are commonly used to quantify the readability of texts based on factors such as sentence length and word length or frequency. Once we quantify the grade levels and readability scores of texts, we do thematic analyses of passages to figure out which themes are acceptable or most difficult for specific demographics. For instance, a passage on a historical event that is not taught in an Asian public school curriculum can be considered difficult. This is debatable because test-taking is NOT about prior knowledge. However, we're considering the psychology behind the test-taker's comfort level in dealing with a "foreign" topic. This hypothesis is compared with the trends and readability tests we did earlier. Another example is when our learners are primary school students. We have to consider the maturity level of the content because you don't want an angry parent calling staff about "sex" and "violence". Although passages are mostly neutral in tone, some concepts could pique a child's interest in, say, human anatomy or sexual diversity in a culture. Knowing Asian Tiger Moms, we want to steer away from such content. I have seen students do extensive research on a given topic outside of the classroom and arrive at unrelated concepts. There was an instance where, at home, a student was caught looking at female genitalia on the computer. He mentioned that he was doing research on something he learned from class, which, without the given context, angered the mother. So, yes, we want to steer away from a passage that could be a gateway to sensitive details for an 11-year-old. I understand that this is a generalization, and one could argue that many topics could be considered a "gateway". The main point here is that content sourcing requires empathy, critical thinking, information synthesis, and attention to detail (to say the least). The good news is that there are literally hundreds of thousands of passages we can work with, and I have rarely seen passages that provide direct context for such "mature" and "sensitive" topics.
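For the curious, the Flesch–Kincaid Grade Level mentioned above is just a formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Here is a back-of-the-envelope sketch of it. The syllable counter is a crude vowel-group heuristic and the sample passage is invented; real tools (and Lexile, which is proprietary) are far more careful.

```python
# A rough sketch of the Flesch-Kincaid Grade Level formula:
# 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
import re

def count_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade_level(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

passage = ("The migration of monarch butterflies spans several generations. "
           "No single butterfly completes the entire journey.")
print(f"Approximate FK grade level: {fk_grade_level(passage):.1f}")
```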
Lastly, we benchmark. This is the most common method and the most tempting. Small institutions rely on larger institutions' research. Whenever a larger institution makes a change, it's on everyone's radar. Some firms opt to follow the change to the letter because...they can. I know. It's unethical. This is where the integrity of the R&D/L&D department matters.
In the end, all of these analyses and methods help us identify patterns and group and sort them into different levels. We figure out commonalities, causal relationships, and correlations to devise a program structure. The structure is carefully divided into a hierarchy of levels, each progressively leading to the next. You may or may not have heard of Bloom's taxonomy, which is a good model for this process.
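To show what that scaffolding can look like on paper, here is a hypothetical skeleton of a program: each level maps a diagnostic score band to a handful of objectives, tagged with the Bloom's taxonomy tier they mainly target. The bands, level names, and objectives are all invented for illustration.

```python
# A hypothetical program skeleton: score bands -> objectives -> Bloom's tier.
# Everything here is invented; it only illustrates the shape of the structure.
program = {
    "Level 1": {
        "score_band": (0, 59),
        "objectives": [
            ("Identify main ideas in short passages", "Understand"),
            ("Take structured notes from a lecture", "Apply"),
        ],
    },
    "Level 2": {
        "score_band": (60, 79),
        "objectives": [
            ("Distinguish fact from inference in questions", "Analyze"),
            ("Evaluate answer choices under time pressure", "Evaluate"),
        ],
    },
}

for level, spec in program.items():
    low, high = spec["score_band"]
    print(f"{level} ({low}-{high}):")
    for objective, bloom_tier in spec["objectives"]:
        print(f"  - {objective} [{bloom_tier}]")
```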
Variables
But let me present a scenario. A facilitator notices a downward trend in student progress. S/he reports this in the student progress notes and devises an action plan. Although the facilitator notices something is not working, s/he has to move on to the next lesson and is effectively forced to ignore the issue. That's the downside of having a structured curriculum: the facilitator cannot do anything to alleviate the known issues, and the students get frustrated. This is a very loose scenario, but the question I am trying to raise is: who is at fault? The implications in this scenario are almost limitless.
The facilitator could work out a compromise with the learner, which means deviating from the syllabus to focus on student weaknesses. Alternatively, I have personally stuck to the syllabus and still remedied such issues, for example by separating students into groups (I call them pods) or pairs so that they can learn from each other. The content or passage is the same, but each group or pair can take a different approach. To illustrate, the pair that lacks skimming and scanning skills focuses on Main Idea and Detail questions, then works together to present its methods to the class.
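Here is a quick sketch of the pod idea, using invented diagnostic data: students who share a weakness work the same passage from that angle. The names and weakness labels are hypothetical.

```python
# A toy sketch of grouping students into pods by their weakest question type.
# Names and diagnostic labels are invented.
from collections import defaultdict

weaknesses = {
    "Mina": "main idea",
    "Jun": "main idea",
    "Sora": "inference",
    "Dae": "vocabulary",
    "Hana": "inference",
    "Bit": "vocabulary",
}

pods = defaultdict(list)
for student, weakness in weaknesses.items():
    pods[weakness].append(student)

for weakness, members in pods.items():
    print(f"Pod ({weakness}): {', '.join(members)}")
# Each pod works the same passage but presents strategies for its own
# question type back to the class.
```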
Now, does the facilitator have that flexibility? Maybe. It depends on the center or firm culture. Does the firm want to foster such a culture? Is the firm confident that all of its teachers are competent? It might not want a newbie teacher to have too much authority over content and methodology; a new-to-program teacher is still adapting to the company's teaching methodology, which is sometimes standardized. This takes us to my next question: is the business model teacher-centered, curriculum-centered, or system-centered? If the firm does want flexibility, who "allows" it? The center managers? The L&D department? The CEO?
I will discuss this further in my next entry, where I delve into the process of instructor training.
NOTE: This entry contains personal anecdotes. No organization is, or should be assumed to be, responsible for the author's opinions or experiences. Some details have been omitted because they relate directly to confidential company policies or practices.
Thoughts? Comments? Anything you would like to add? Feel free to comment or shoot me a message!