Last Thursday, June 7th, in a workshop at the EPAL 2018 conference in Grenoble, Émilie Besse and I presented how we deployed MOOTs (Massive Open Online Textbooks) at the Espé de Grenoble, before about ten colleagues, who then spent roughly an hour drafting pedagogical content of their choice in reStructuredText and Sphinx. This work is carried out within the ReFlexPro project. Despite the difficulty of the tools, the interest was clear.
Last Friday (June 8th), I presented at the Grenoble Workshop on Models and Analysis of Eye Movements our framework for a Multimodal Analysis of Teaching Centered on Shared Attention and Knowledge Access, co-authored with Louise Héléna Aubineau, Dominique Vaufreydaz, and Jim Crowley. The abstract is below, and the [Slides] are available.
The effects of teaching on learning are mostly uncertain, hidden, and not immediate. Research into how teaching impacts learning has recently been given a significant boost by signal-processing devices and data-mining analyses. We devised a framework for studying teaching and learning processes which posits that lessons are composed of episodes of joint attention and access to the taught content, and that the interplay of behaviors like joint attention, actional contingency, and feedback loops composes different levels of teaching. Teaching by social tolerance occurs when learners (Ls) have no attentional problems but their access to the taught knowledge depends on the teacher (T). Teaching by opportunity provisioning occurs when Ls can be aware of the taught content but lack access to it (e.g., lack of understanding), and T builds ad hoc situations in which Ls are provided with easier content. Teaching by stimulus or local enhancement occurs when Ls have full access to the content but lack attention toward it: T explicitly shows the content to Ls, slows down her behavior, and speaks and acts in an adapted way (e.g., motherese). A variety of devices installed in a classroom will capture and automatically characterize these events. T's and Ls' utterances and gazes will be recorded through low-cost cameras mounted on 3D-printed glasses, and T will wear a mobile eye tracker and a mobile microphone. Instructional material is equipped with QR codes so that Ls' and T's video streams can be processed to determine where each person is looking, and to infer the corresponding teaching levels. This novel framework will be used to analyze instructional events in ecological situations, and will be a first step toward building a "pervasive classroom", where eye-tracking and sensor-based devices analyze a wide range of events in a multimodal and interdisciplinary way.
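To make the three teaching levels concrete, here is a minimal, purely illustrative sketch of how the attention/access conditions described in the abstract could be mapped to a level label once the sensing pipeline has estimated a learner's state. The function name, the three boolean inputs, and the encoding of "access depends on T" as a separate flag are all assumptions of mine, not part of the actual framework or its implementation.

```python
from enum import Enum
from typing import Optional


class TeachingLevel(Enum):
    """The three teaching levels posited by the framework."""
    SOCIAL_TOLERANCE = "social tolerance"
    OPPORTUNITY_PROVISIONING = "opportunity provisioning"
    STIMULUS_ENHANCEMENT = "stimulus or local enhancement"


def classify_teaching_level(
    learner_attends: bool,        # does L attend to the taught content?
    learner_has_access: bool,     # can L access (e.g., understand) the content alone?
    access_depends_on_teacher: bool,  # is L's access gated by T's behavior?
) -> Optional[TeachingLevel]:
    """Hypothetical mapping from a learner's estimated state to a teaching level.

    The conditions paraphrase the abstract:
    - social tolerance: L attends, but access depends on T;
    - opportunity provisioning: L attends (is aware) but lacks access;
    - stimulus/local enhancement: L has access but does not attend.
    """
    if learner_attends and access_depends_on_teacher:
        return TeachingLevel.SOCIAL_TOLERANCE
    if learner_attends and not learner_has_access:
        return TeachingLevel.OPPORTUNITY_PROVISIONING
    if not learner_attends and learner_has_access:
        return TeachingLevel.STIMULUS_ENHANCEMENT
    # L both attends and has independent access: no teaching intervention needed.
    return None
```

In the envisioned pipeline, the three boolean inputs would come from the multimodal sensors (gaze estimated from the glasses cameras and eye tracker, content location from the QR codes); the sketch only shows the final, symbolic step of the inference.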