Abstract

The MOOClet formalism provides a framework for how instructors, engineers, and scientific researchers can conceptualize, design, and use technology for digital education. It guides the alignment of instructional improvements to digital educational resources (such as lessons, exercises, and questions) with the advancement of scientific research on learning technologies. The framework defines MOOClets as modular components of online courses that can be modified to create different versions, which in turn can be iteratively and adaptively improved and personalized through experimental comparisons that identify which versions work better, and for whom. This talk shows how the MOOClet Framework provides guidance in identifying MOOClets and in augmenting existing platforms with a platform-independent layer that enables experimentation and personalization even when platforms do not provide native support. We present examples of experiments that improve learning from personalized worked examples and increase engagement through personalized emails in MOOCs. These studies also show how MOOClets can be automatically improved and personalized via an API using a broad class of machine learning and AI algorithms for reinforcement learning agents, such as multi-armed bandits and Markov Decision Processes.

Speaker Bio

Joseph Jay Williams designs adaptive online modules that improve and personalize people's education in complex real-world environments by aligning the goals of instructors and platform developers with those of behavioral scientists conducting experiments and machine learning researchers. Examples include increasing motivation for students solving mathematics exercises on Khan Academy and strategies for self-questioning that enhance learning from videos in MOOCs. He is a Research Fellow at HarvardX, the online learning research and development component of Harvard University. He is also a member of the Intelligent Interactive Systems Group in Harvard Computer Science, and leads the advisory board for an NSF Cyberinfrastructure grant to Neil Heffernan at WPI to crowdsource randomized controlled trials from psychology and education researchers, which are being run on the ASSISTments online mathematics platform. He completed a postdoc in Stanford University's Graduate School of Education in Summer 2014, working with the Office of the Vice Provost for Online Learning and Candace Thille's Open Learning Initiative. He received his PhD in 2013 in Experimental and Computational Cognitive Science from UC Berkeley's Psychology Department. As part of the Concepts and Cognition Lab he investigated why prompting people to explain "why?" helps reasoning, and in the Computational Cognitive Science Lab he developed models of reasoning, decision-making, and learning using Bayesian statistics and machine learning.
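The abstract above notes that MOOClet versions can be adaptively improved by reinforcement learning agents such as multi-armed bandits. The sketch below is a minimal, hypothetical illustration of that idea: a Beta-Bernoulli Thompson sampling policy choosing among alternative versions of a single MOOClet. The version names, the binary reward definition, and the simulation loop are assumptions made for illustration and do not reflect the actual MOOClet API.

```python
"""Illustrative sketch only: Thompson sampling over hypothetical versions of
one MOOClet (e.g., alternative worked examples). Not the actual MOOClet API."""

import random


class ThompsonSamplingMOOClet:
    """Keeps a Beta(successes + 1, failures + 1) posterior per version and
    picks the version whose posterior sample is largest."""

    def __init__(self, versions):
        self.versions = list(versions)
        # One [successes, failures] counter per version.
        self.counts = {v: [0, 0] for v in self.versions}

    def choose_version(self):
        # Sample each version's Beta posterior; serve the version with the
        # highest draw (versions more likely to be best are chosen more often).
        draws = {
            v: random.betavariate(s + 1, f + 1)
            for v, (s, f) in self.counts.items()
        }
        return max(draws, key=draws.get)

    def record_outcome(self, version, success):
        # Binary reward, e.g. "did the learner solve the next problem?"
        if success:
            self.counts[version][0] += 1
        else:
            self.counts[version][1] += 1


if __name__ == "__main__":
    # Hypothetical MOOClet with three alternative explanations and made-up
    # success rates, used only to simulate learners for this sketch.
    policy = ThompsonSamplingMOOClet(["explanation_A", "explanation_B", "explanation_C"])
    true_rates = {"explanation_A": 0.3, "explanation_B": 0.5, "explanation_C": 0.7}

    for _ in range(1000):  # simulated learners
        version = policy.choose_version()
        policy.record_outcome(version, random.random() < true_rates[version])

    print(policy.counts)  # the better version should accumulate the most trials
```

In a deployed setting, `choose_version` would be called when a learner reaches the resource (a worked example, an email, a prompt) and `record_outcome` when the relevant outcome is observed, so that the allocation of learners to versions shifts toward whichever version the data favors.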