Human activity classification remains challenging due to the need to suppress structural noise, the multitude of possible activities, and strong variations in video acquisition. This paper studies human activity classification in a collaborative learning environment. It explores the use of color-based object detection in conjunction with contextualization of object interactions to isolate motion vectors specific to each human activity. The basic approach is to use a separate classifier for each activity.
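The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the color ranges, the flow field, and the per-activity threshold classifier are all hypothetical placeholders standing in for the actual detection and classification stages.

```python
import numpy as np

def color_mask(frame, lo, hi):
    """Keep only pixels whose RGB values fall inside [lo, hi] per channel."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((frame >= lo) & (frame <= hi), axis=-1)

def classify_activity(flow, mask, threshold):
    """One binary classifier per activity: mean motion magnitude in the
    color-detected region compared against an activity-specific threshold."""
    vecs = flow[mask]                 # motion vectors restricted to the region
    if vecs.size == 0:
        return False
    return float(np.linalg.norm(vecs, axis=-1).mean()) > threshold

# Toy example: a 4x4 frame whose top-left 2x2 block is "red".
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2, :2] = [200, 20, 20]
mask = color_mask(frame, lo=[150, 0, 0], hi=[255, 60, 60])

flow = np.zeros((4, 4, 2))
flow[:2, :2] = [3.0, 4.0]             # motion of magnitude 5 inside the region
print(classify_activity(flow, mask, threshold=1.0))  # True
```

In a full system, one such classifier would be trained per activity (e.g. writing vs. typing), each looking only at the motion isolated by its own color/context mask.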
This paper introduces the problem of robust head detection in collaborative learning environments. In such environments, the camera remains fixed while the students are allowed to sit at different parts of a table. Example challenges include students facing away from the camera or exposing different parts of their faces to the camera. To address these issues, the paper proposes two new methods based on Amplitude-Modulation Frequency-Modulation (AM-FM) models.
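To make the AM-FM idea concrete, the sketch below demodulates a 1-D signal into its instantaneous amplitude (AM) and frequency (FM) components via the FFT-based analytic signal. This is only an illustration of the underlying model, assuming a single-component signal; AM-FM image analysis of the kind used for head detection works on 2-D images with multi-scale filterbanks rather than this 1-D construction.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real 1-D signal via FFT (Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0      # double positive frequencies, zero negative ones
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def am_fm_demodulate(x):
    """Estimate instantaneous amplitude (AM) and frequency (FM) of a 1-D signal."""
    z = analytic_signal(x)
    amplitude = np.abs(z)                       # AM component
    phase = np.unwrap(np.angle(z))
    frequency = np.diff(phase) / (2 * np.pi)    # FM component, in cycles/sample
    return amplitude, frequency

# Toy check: a pure cosine at 0.0625 cycles/sample with amplitude 2.
t = np.arange(256)
x = 2.0 * np.cos(2 * np.pi * 0.0625 * t)
amp, freq = am_fm_demodulate(x)
print(round(amp[64], 4), round(freq[64], 4))  # 2.0 0.0625
```

For a single sinusoid the demodulation recovers the constant amplitude and frequency exactly; for richer signals (and images), the AM and FM components vary locally and serve as texture features.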
In this paper, we summarize some of the lessons learned from the Advancing Out-of-School Learning in Mathematics and Engineering (AOLME) project. The AOLME project uses an integrated curriculum that relies on the use of basic concepts from middle-school mathematics to teach the foundations of image and video representations. The middle-school students, mostly from underrepresented groups, learn how to program their own video representations using Python libraries running on the Raspberry Pi. Overall, we have found that the students enjoy participating in the project.
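To illustrate the kind of representation the curriculum builds on, the following sketch treats an image as a grid of RGB triplets and a video as a list of frames over time. This is an illustrative example in the spirit of the activities, not code from the AOLME curriculum itself.

```python
# A tiny image as a grid (list of rows) of RGB triplets, built from
# nothing more than lists and basic arithmetic.
RED, BLUE = (255, 0, 0), (0, 0, 255)

width, height = 4, 2
image = [[RED for _ in range(width)] for _ in range(height)]
image[0][2] = BLUE   # set the pixel at row 0, column 2

# A "video" is then just a list of frames (images) over time.
video = [image for _ in range(3)]
print(len(video), len(video[0]), len(video[0][0]))  # 3 frames, 2 rows, 4 columns
```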