Learning models: chunking and collaborative annotation for existing videos
In Learning How to Learn [1](https://www.coursera.org/learn/learning-how-to-learn), Professor Barbara Oakley helps students learn more effectively by explaining the top-down and bottom-up approaches to learning. Bottom-up refers to chunking: letting the brain experience patterns and keeping them active in working memory. The chunks may not make complete sense in the broader context right away; they are building blocks, like the mini steps of a dance that you will eventually assemble into the whole routine.
The top-down approach, on the other hand, is the macro view: the attempt to glue everything together. It is a check on what you are learning and an effort to build a connecting story that ties many chunks into the whole. Context emerges where bottom-up meets top-down.
Fast Clip as a platform for clipping videos: chunking first
The process of clipping an existing video by user X may not have a strict goal such as annotating or providing a transcription. Before settling on a goal, the user has an opportunity to iterate on the material and create multiple chunks of data related to the whole, such as clipping the main story by defining a range of time (a period) and tagging it with metadata. As the user traverses the video and interacts with it, she is chunking and also learning. From an individual perspective, therefore, clipping (or chunking) already involves annotation and auto-collaboration (collaboration with oneself), and it lets the user annotate with the hands, the voice, and the eyes: she can type, attach, draw, mark and highlight, and more, alongside the various timelines or ranges tied to the main timeline of the original video or subject of analysis and learning.
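To make the idea concrete, here is a minimal sketch of the kind of data a clipping session could produce: a time range over the main timeline, free-form tags, and annotations of several kinds. The names used here (TimeRange, Annotation, Clip) are illustrative assumptions, not the actual Fast Clip data model.

```typescript
// A minimal sketch of the data a clipping session could produce.
// All names here are illustrative assumptions, not the actual Fast Clip API.

/** A period on the original video's timeline, in seconds. */
interface TimeRange {
  start: number;
  end: number;
}

/** The different ways a user might annotate while clipping. */
type Annotation =
  | { kind: "text"; body: string }
  | { kind: "attachment"; url: string }
  | { kind: "drawing"; svgPath: string }
  | { kind: "highlight"; note?: string };

/** One chunk: a clipped range plus the metadata and annotations attached to it. */
interface Clip {
  videoId: string;       // the original video or subject of analysis
  range: TimeRange;      // the slice of the main timeline
  tags: string[];        // free-form metadata added while traversing the video
  annotations: Annotation[];
  author: string;        // the user doing the chunking
}

// Example: user X clips an interview question and tags it while learning.
const questionClip: Clip = {
  videoId: "interview-042",
  range: { start: 95, end: 128 },
  tags: ["question", "background"],
  annotations: [{ kind: "text", body: "Interviewer asks about early career." }],
  author: "userX",
};
```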
Fast Clip as a platform for collaboration and contextualization
And, as Professor Oakley indicates, focusing only on the chunks can also make it harder to build the main context, since it is important to allow diffuse learning as well, not only focused learning.
That is where collaboration comes in as a helping feature, exactly because other minds and perspectives can be brought into a collaborative timeline. Beyond auto-collaboration, other perspectives, contributed by other collaborators, support new connections and new context in learning. In this way, Fast Clip may support improvements in learning.
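As a sketch only (reusing the hypothetical Clip shape above, not Fast Clip's real API), a collaborative timeline could simply merge each collaborator's clips and order them against the shared original timeline:

```typescript
// A minimal sketch of a collaborative timeline, assuming the Clip shape above.
// Merging clips from several collaborators aligns every perspective against
// the same original timeline, so new connections become visible in context.

function buildCollaborativeTimeline(clipsByCollaborator: Clip[][]): Clip[] {
  return clipsByCollaborator
    .flat()                                          // bring every collaborator's chunks together
    .sort((a, b) => a.range.start - b.range.start);  // align them on the shared timeline
}

// Example: user X's chunk interleaved with a second collaborator's clip.
const sharedTimeline = buildCollaborativeTimeline([
  [questionClip],
  [{
    videoId: "interview-042",
    range: { start: 110, end: 140 },
    tags: ["context", "follow-up"],
    annotations: [{ kind: "highlight", note: "Connects to the earlier project." }],
    author: "collaboratorY",
  }],
]);
```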
Fast Clip Features Related to Chunking and Contextualization
Hypothesis: clipping a range of time. This allows the user to engage, first, in the simple activity of identifying breaks and moments, such as the span from when an interviewer starts asking a question to the end of that question. From that, he or she understands what is needed for the annotation work, for example.
Hypothesis: productivity from taking one slice at a time. When a moment has already been identified, the user can better manage the effort to put into it. For example, it may already be indicated that the user does not need to watch further sections.
Hypothesis: grouping future moments and contextualizing them. An interviewer may ask a question, and the subject's answer may consist of three parts, all of them in sequence as the subject narrated them. Identifying those three elements as a series that forms part of a whole gives a clear visual indication that one can see ahead of time, as in the sketch below.
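As one possible illustration of this grouping (again reusing the hypothetical Clip shape from earlier, not a description of Fast Clip's implementation), three sequential answer clips could be kept together under the question they answer:

```typescript
// A minimal sketch of grouping moments into a series, reusing the
// hypothetical Clip shape above rather than Fast Clip's actual model.

/** A group that presents several clips as parts of one whole, in order. */
interface ClipGroup {
  title: string;       // e.g. the interviewer's question
  members: Clip[];     // the answer's parts, kept in narrated sequence
}

/** Keep the members sorted by start time so the series can be read ahead of time. */
function makeGroup(title: string, members: Clip[]): ClipGroup {
  return {
    title,
    members: [...members].sort((a, b) => a.range.start - b.range.start),
  };
}

// Example: one question whose answer is narrated in three sequential parts.
const answerInThreeParts = makeGroup("How did the project start?", [
  { videoId: "interview-042", range: { start: 200, end: 230 }, tags: ["answer", "part-1"], annotations: [], author: "userX" },
  { videoId: "interview-042", range: { start: 230, end: 255 }, tags: ["answer", "part-2"], annotations: [], author: "userX" },
  { videoId: "interview-042", range: { start: 255, end: 290 }, tags: ["answer", "part-3"], annotations: [], author: "userX" },
]);
```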