[bull-ia] Call for Papers – Workshop about Optimizing Human Learning

* Apologies for cross-posting *

Please kindly redistribute this CFP to all relevant research venues.

                                Call for Papers
                           Optimizing Human Learning
2nd International Workshop on eliciting Adaptive Sequences for Learning (WeASeL)
                      Kingston (Jamaica), 3 or 4 June 2019
Held in conjunction with Intelligent Tutoring Systems (ITS) 2019
3-7 June 2019, Kingston, Jamaica
Keynotes will be announced later.
                                Important Dates
Submission deadline:            16 April 2019 (23:59 AoE – Anywhere on Earth)
Notification of Acceptance:     23 April 2019
Camera-ready due:                  6 May 2019
Workshop date:               3 or 4 June 2019
What should we learn next? In the current era, where digital access to knowledge
is cheap and user attention is expensive, numerous online applications have been
developed for learning. These platforms collect massive amounts of data across
various learner profiles, which can be used to improve the learning experience:
intelligent tutoring systems can infer what activities worked for different
types of students in the past and apply this knowledge to instruct new students.
To learn effectively and efficiently, the experience should be adaptive: the
sequence of activities should be tailored to the abilities and needs of each
learner, in order to keep them stimulated and to avoid boredom, confusion, and
dropout. In the context of reinforcement learning, we want to learn a policy for
administering exercises or resources to individual students.

Educational research communities have proposed models that predict mistakes and
dropout in order to detect students who need further instruction. Such models
are usually calibrated on data collected in an offline scenario and may not
generalize well to new students. There is now a need to design online systems
that continuously learn as data flows in, and that self-assess their strategies
when interacting with new learners. Such models have already been deployed in
commercial online applications (e.g. streaming, advertising, social networks)
to optimize interaction, click-through rate, or profit. Can we use similar
methods to enhance the performance of teaching? When optimizing human learning,
which metrics should be optimized? Learner progress? Learner retention?
User addiction? The diversity or coverage of the proposed activities?
What are the issues inherent in adapting the learning process in online
settings, in terms of privacy, fairness (disparate impact, inadvertent
discrimination), and robustness to adversaries trying to game the system?
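To make the reinforcement-learning framing above concrete, here is a minimal
sketch of a Thompson-sampling multi-armed bandit that selects which exercise to
administer next. The exercise pool, success probabilities, and the choice of
"answered correctly" as the reward signal are all illustrative assumptions, not
part of the call; as the questions above note, raw success rate is likely not
the right metric for actual learning.

```python
import random

random.seed(0)

# Hypothetical pool of exercises; each has an unknown probability that a
# simulated student answers it correctly (used here as the reward signal).
true_success = {"ex_easy": 0.9, "ex_medium": 0.6, "ex_hard": 0.3}

# Beta(1, 1) priors for Thompson sampling: [successes + 1, failures + 1].
params = {ex: [1, 1] for ex in true_success}

def choose_exercise():
    """Sample a success rate from each posterior; pick the highest draw."""
    draws = {ex: random.betavariate(a, b) for ex, (a, b) in params.items()}
    return max(draws, key=draws.get)

def update(ex, correct):
    """Update the Beta posterior of the chosen exercise."""
    params[ex][0 if correct else 1] += 1

for _ in range(500):
    ex = choose_exercise()
    correct = random.random() < true_success[ex]  # simulated student answer
    update(ex, correct)

# The policy concentrates on the exercise with the highest observed success
# rate under this (deliberately naive) reward.
best = max(params, key=lambda ex: params[ex][0] / sum(params[ex]))
print(best)
```

A real tutoring policy would replace the reward with a pedagogically grounded
signal (e.g. estimated learning gain), which is precisely one of the open
questions this workshop addresses.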
Student modeling for optimizing human learning is a rich and complex task that
draws on methods from machine learning, cognitive science, educational data
mining, and psychometrics. This workshop welcomes researchers and practitioners
working on the following topics (this list is not exhaustive):
– abstract representations of learning
– additive/conjunctive factor models
– adversarial learning
– causal models
– cognitive diagnostic models
– deep generative models such as deep knowledge tracing
– item response theory
– models of learning and forgetting (spaced repetition)
– multi-armed bandits
– multi-task learning
– reinforcement learning
                               Topics of Interest
–   How can we put the student in optimal conditions to learn?
    e.g. incentives, companion agents, etc.
–   When optimizing human learning, which metrics should be optimized?
    –   The progress of the learner?
    –   The diversity or coverage of the proposed activities?
    –   Fast recovery of what the student does not know?
    –   Can a learning platform be solely based on addiction,
        maximizing interaction?
–   What kinds of activities give enough choice and control to the learner
    to benefit their learning (adaptability vs. adaptivity)?
–   Do the strategies differ when we are teaching a group of students?
    Do we want to enhance social interaction between learners?
–   What feedback should be shown to the learner in order to allow reflective
    learning? e.g. visualization, learning map, score, etc.
–   What student parameters are relevant? e.g. personality traits, mood,
    context (is the learner in class or at home?), etc.
–   What explicit and implicit feedback does the learner provide?
–   What models of learning are relevant? E.g. cognitive models,
    modeling forgetting in spaced repetition.
–   What specific challenges in ML are we facing with these data?
–   Do we have enough datasets? What kinds of datasets are missing?
–   How to guarantee fairness/trustworthiness of AI systems that learn from
    interaction with students?
Short papers
    Between 2 and 3 pages, LNCS format
Full papers
    Between 4 and 6 pages, LNCS format
Submissions can be made through EasyChair, see https://humanlearn.io.
                                Workshop Chairs
Fabrice Popineau, CentraleSupélec & LRI, France
Michal Valko, Inria Lille, France
Jill-Jênn Vie, RIKEN AIP, Japan
                               Program Committee
François Bouchet, LIP6/Sorbonne Université, France
Fabrice Popineau, CentraleSupélec & LRI, France
Benoît Choffin, CentraleSupélec & Didask, France
Julien Seznec, lelivrescolaire.fr, France
Michal Valko, Inria Lille, France
Jill-Jênn Vie, RIKEN AIP, Japan