Friday, March 24, 2017

Review / Invent to Learn: Making, Tinkering, and Engineering in the Classroom

While visiting a branch of our city's library system that we don't often get to, I browsed the tiny section on education. I found Invent to Learn by chance, and though the title's font left me a bit skeptical, I decided to give it a try. I'm glad I did.


The premise of the book is that adopting principles of making into formal and informal education is both feasible and worthwhile. The focus is on maker projects oriented around fabrication, physical computing, and computer programming. The book starts with solid learning theory and goes all the way to how to actually teach with open-ended maker projects.

I'm admittedly an education nerd, so I was delighted to see the book start with some relevant education history, learning theory, and discussion on "thinking about thinking." Seymour Papert, creator of Logo among many other things, is a central figure threaded throughout.

Papert coined the term constructionism, which builds on a previously established theory of learning, constructivism. As defined in the book, constructivism is:
...a well-established theory of learning indicating that people actively construct new knowledge by combining their experiences with what they already know. Constructivism suggests that knowledge is not delivered to the learner, but constructed inside the learner's head.
On constructionism:
Papert's constructionism takes constructivist theory a step further towards action. Although the learning happens inside the learner's head, this happens most reliably when the learner is engaged in a personally meaningful activity outside of their head that makes the learning real and shareable. This shareable construction may take the form of a robot, musical composition, paper mâché volcano, poem, conversation, or new hypothesis.
The authors claim that constructionism is the learning theory that resonates most with the maker movement, and build the rest of the book on this idea.

The rest of the text covers what makes a good project, what 'making' means today, the three game-changers in making (the aforementioned fabrication, physical computing, and programming), the practical stuff of actually teaching through maker projects, and how to convince others that the maker approach is a good idea.

Many concrete resources and materials are described throughout the book. It was published in 2013, though, so a lot of the specifics are likely to be out of date. Nonetheless, the suggestions should serve as a good starting point. (As a side note, the book's website, inventtolearn.com, appears to have been hacked, so best not to visit it at the moment.)

Overall, if you are curious about having your students learn by making things (real or virtual), and want to get a taste of the theory behind why it might work as well as the practical suggestions on how to do it, it's worth checking this book out.

Thursday, March 2, 2017

Why I Prefer Processing as a First Language to Teach

Once upon a time, almost 5 years ago, I wrote a post about Python vs Processing as a first language to learn. It became my only post to appear on Hacker News and still gets plenty of hits. I wrote those reflections quite early in my use of either language for teaching, though. Now that I've used both in several teaching contexts, I'd like to explain why Processing still holds the top spot for me when teaching beginners about computer science and programming.

(from a demo in my CS1 course using Processing)

Before I begin, I'd like to say that just because I like Processing best doesn't mean that other options aren't also good. Python, for example, is still a good first language in my view; I even use it in some of my course designs and workshops, especially if someone is in a field that will likely only ever use Python. For younger audiences, Scratch is still my go-to. There are many other languages and tools I probably haven't even seen yet that have great promise, too. But for workshops and courses that introduce beginners in high school, post-secondary, and beyond, Processing remains my first choice.

Without further ado, here are some of the reasons why I love Processing...

Processing is easy to download, install, and run. When you open it, it looks clean and simple. You can ask learners to type in a single line of code, press play, and see a drawing pop up. This matters, especially when you are aiming to reduce the intimidation of learning to program among less traditional groups. I'm always looking to remove any barrier that might make a beginner "nope out" of computing.
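
For instance, here's the kind of one-liner I mean (a sketch of my own for illustration, not from any particular lesson). On its own, this is a complete Processing program that opens a small window and draws a circle:

    // a complete sketch: draws an ellipse in the default 100x100 window
    ellipse(50, 50, 80, 80);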


The fact that Processing's main purpose is to create visual output is also a huge bonus. You need very little code to create something that is meaningful enough to show others. Running home to show your family that your code spits out a number on a console just isn't the same as showing a (usually interactive) visual program. On the console, it's harder to understand that the code does something interesting and that it took effort to do it.
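
As a rough illustration (again my own minimal sketch, not one of the course demos), a handful of lines is already enough to put something interactive on screen:

    // a circle that follows the mouse
    void setup() {
      size(400, 400);
    }

    void draw() {
      background(255);                 // clear the previous frame
      ellipse(mouseX, mouseY, 40, 40); // draw at the current mouse position
    }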

A further advantage of visual output is that learners can see more of what their own code is doing. When you perform a computation with a single result at the end, the code can feel a bit like a black box. If the answer is wrong, which part of the code caused the issue? At least when all your code contributes to what's on screen, you have a better chance of reasoning about which part is making the output wrong.

Pedagogically, I have a strong preference for introducing as few individual concepts at a time as possible. I don't like the approach of most textbooks and courses I see, where all the fundamentals (variables, branching, iteration, functions, etc.) are introduced quickly up front with toy examples before finally getting to the more interesting projects and demos. At the same time, starting with something too complex right away is intimidating and would likely lead to cognitive overload (*cough* Pygame).

I find that Processing allows me to design demos of just the right complexity. Each new demo needs at most a few new concepts (whether a programming concept or something Processing-specific like adding images). I can show the demo, then start building it up from scratch, introducing concepts just-in-time. It's also possible to design lots of interesting exercises with the same minimal set of new concepts, allowing learners to focus on mastering a small number of things at a time. I find this more difficult to achieve with other languages, even when there is a visual component (for example, I found Python Turtle too restrictive for anything longer than a short workshop, while Pygame is far too complicated up front).

As an example, here is a summary of my CS1 course design in terms of the demos I cover for each module and the concepts that are introduced (more detail, including slides and code):

  • Drawing pictures with Processing – a static image with several semi-complex entities (basic drawing, variables)
  • Interactive painting with Processing – an interactive painting program that allows you to change colours (active mode/interactions, basic functions, switch statement); a simplified sketch of this kind of interaction appears after this list
  • Jukebox – a music player with three buttons to start and stop songs (functions as abstractions, Boolean expressions, if statements)
  • Sheep AI – pet sheep that wanders toward your mouse, drinks tea when it reaches it, and emanates coloured rings when clicked; pictured above (state machines, while loops, arrays)
  • Foreign student data visualization – visualization of sortable data available on Canada's Open Government website (Strings, algorithms, state-only objects, constructors)
  • Social media set cover problem – contextualized example and visualization of the set cover problem for a small set of data (shared data and references, for loops)
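
To give a flavour of the code involved, here is a stripped-down version of the kind of interaction the interactive painting module is built around (a simplified sketch I'm writing out for illustration, not the actual demo code from the course):

    void setup() {
      size(400, 400);
      background(255);
    }

    void draw() {
      // nothing to redraw each frame; the canvas keeps whatever has been painted
    }

    void mouseDragged() {
      noStroke();
      fill(0);
      ellipse(mouseX, mouseY, 10, 10); // paint wherever the mouse is dragged
    }
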
For beginners, I prefer teaching languages that are statically typed and generally more 'rigid', for lack of a better term. The more the compiler or interpreter can tell you about what you're doing wrong, the better. I've seen so much frustration from my students working in Python because they do weird things that are syntactically correct but make no sense semantically. Of course, this point doesn't limit the choice to Processing, but it's an added plus for me.

It can also be useful that Processing is Java, but with the tricky parts abstracted away in the IDE. Starting this way is great because you don't have to say things like "don't worry about what public static void means, I promise it'll make sense later." Yet students are actually learning Java syntax when they learn Processing, and can even start pulling in more advanced features of Java later on if desired. When a follow-up course is done in Java (as the CS2 course I designed is required to be), the transition is smooth. Even better, you can still use the Processing library in a 'regular' Java project (example), so you can incorporate it into those later courses if you want. For instance, in CS2, I do so to get students using a third-party library and to learn about MVC and event-driven programming (see more).
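
For anyone curious what that looks like, here is a minimal sketch of using the Processing core library from plain Java (my own illustration, assuming Processing 3's core jar is on the classpath; the class name is made up):

    import processing.core.PApplet;

    public class MySketch extends PApplet {

      // launch the sketch like any other Java program
      public static void main(String[] args) {
        PApplet.main("MySketch");
      }

      @Override
      public void settings() {
        size(400, 400);
      }

      @Override
      public void draw() {
        background(255);
        ellipse(mouseX, mouseY, 40, 40); // same drawing calls as in the PDE
      }
    }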

I could probably go on about how useful I've found Processing in teaching beginners, but I think this is a pretty good list for now. I'd love to hear about how you've used Processing in your own teaching!