I had no sooner finished drafting a section of my senior thesis hailing Canada’s contributions to the massive learning phenomenon than a note arrived from George Veletsianos, Canada Research Chair at the Royal Roads University School of Education and Technology.
In it, Professor Veletsianos pointed me toward an e-book he and his students put together documenting each student’s personal experience with a MOOC they had taken.
This ground-level reporting (with each student’s personal MOOC experience covered in a separate chapter) makes interesting reading, especially in light of Professor Veletsianos’ compelling argument that the nuanced attitudes and opinions of real human beings taking actual classes provide insight that cannot be gleaned by looking at “Big Data” (or, I’d add, by speculating on the pedagogical implications of this lecture format or that assessment technique).
Needless to say, I’d be the last person to dispute the importance of ground-level experience when trying to understand the pluses and minuses of new teaching and learning tools (including ones that claim to be able to teach 50,000 students at the same time). But why?
There is certainly a common-sense appeal to the assertion that we need to put courses in front of students and see what they think before drawing conclusions about how well they work (regardless of what the statistics tell us). But there is also a philosophical argument for stressing seemingly mushy human opinion over the clean elegance of the data point and the algorithm.
This point was made nearly fifty years ago, when the brilliant computer scientists laboring to produce the first Artificial Intelligence breakthroughs were about to learn the limits of their discipline from a philosophy professor who happened to be an expert on the very thinker (Martin Heidegger) who would have understood the fool’s errand these AI researchers were on.
I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing significance and relevance – a problem that Heidegger saw as implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned what Descartes called values and John Searle now calls function predicates.
But, Heidegger warned, values are just more meaningless facts. To say a hammer has the function of being for hammering leaves out the defining relation of hammers to nails and other equipment, to the point of building things, and to our skills – all of which Heidegger called readiness-to-hand – and so attributing functions to brute facts could not capture the meaningful organization of the everyday world.
This forty-page essay by Dreyfus runs through the entire intellectual drama (it was an easier read after getting three-quarters of the way through a philosophy degree than it would have been before – so I recommend it primarily for computer scientists who need a shot of humility or philosophers who need a boost of self-esteem). But the gist of his argument is that not only is it impossible to gather all the data and figure out all the rules needed to construct human experience in purely algorithmic terms, but that even trying to do so is an exercise in madness.
It was only when AI researchers abandoned their assumption that human experience was a mathematical code that needed to be reverse engineered that their field started to bear fruit.
And in less lofty domains, we should keep in mind that while man might not be the measure of all things, feedback from actual human beings interacting with tools (including MOOCs) in the real world will provide insights that no computerized analysis can ever give us (regardless of how big a machine we build or how many clever programmers we assign to turn it into the new Oracle of Delphi).