“The present is the least interesting time to live in.”

January 15, 2018 Leave a comment

“Have a glimmer of an idea; take it out 30 years where there is no possibility of worrying about ‘how am I going to get from where I am now to this idea?’ That is the idea killer of all time: ‘how is this incremental to the present?’ The present is the least interesting time to live in.” — Alan Kay (at the 41 min 20 sec mark)

Alan Kay, 2015: Power of Simplicity

Categories: Uncategorized

My first CS class started with Turing machines

January 5, 2018 1 comment

My freshman year at Washington University in St. Louis began in the Fall of 1989. I took a class called CS135. It was required of CS majors, as well as of majors in other fields that had an introductory CS course requirement. I have no recollection of what the class was actually called, and alas I cannot find any references to it on the web.

The class was roughly divided into three equal parts: 1) Turing machines, 2) assembly language, and 3) Pascal.

The lab computers were early MS-DOS PCs of some sort, with a couple of 5 1/4″ disk drives. There were five machines per table, linked to a dot matrix printer via a hardware switch.

Seriously, the class started with Turing machines. We were provided a Turing machine simulator, where we’d convert our state diagrams into a table, enter the input tape, and watch the states change and the Turing machine’s read/write head move back and forth and twiddle the symbols on the tape.
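The workflow described above (encode the state diagram as a table, load a tape, let the head twiddle symbols) fits in a few lines of Python. This is a toy sketch, not the actual CS135 simulator; the bit-flipping machine at the end is an invented example.

```python
# A minimal Turing machine simulator: the transition table maps
# (state, symbol) to (new_state, symbol_to_write, head_move).
def run_tm(table, tape, state="q0", blank="_", max_steps=1000):
    """Run a Turing machine; return (final state, tape contents)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in table:  # no applicable rule: halt
            break
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape))

# Invented example: flip every bit, halting at the first blank cell.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
}

state, result = run_tm(flip, "0110")  # result == "1001"
```

Watching the `(state, symbol)` lookups step by step is essentially what the lab exercise amounted to, minus the dot matrix printer.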

The second section of the class used a fake assembly language called SNORE. You’d type in your program and see which instruction was being executed and watch the registers change. Alas, I can’t find any information on SNORE on the web; this may partially be the result of the word SNORE (like Processing) naturally getting a lot of Google hits that have nothing to do with programming.

The final third of the class was Pascal, and was much more along the lines of what most intro CS classes at other schools were probably like at the time. (Pascal was all the rage in the 80s, largely because Philippe Kahn got the idea to sell Turbo Pascal for $50, when compilers from companies such as Microsoft cost hundreds of dollars, were slower, and overall provided a much less pleasant user experience.)

The second class, CS236, was entirely in Pascal, and covered more advanced topics like pointers. I tested out of CS236, and went straight to taking CS301, which was a typical “discrete math for CS folks” class. (I had never formally studied pointers, but after a few examples it was easy enough to figure out what they did and be able to step through and hand-trace a few programs using them.)

Looking back on CS135, almost three decades later, and almost two decades into teaching, I wonder what the designers of the class were thinking. I don’t particularly recall what my feelings were as a student — I probably thought the Turing machines were interesting puzzles but wanted to get on with “real” programming. Of course, I see that the designers were going for a more CS-flavored version of the Patt & Patel Introduction to Computing Systems: From Bits and Gates to C and Beyond curriculum, starting at a low level and then building increasing layers of abstraction. Except Turing machines are of theoretical interest; no actual practical computer directly uses a Turing machine model of computation as its core. I can’t imagine that the Turing machines felt exciting and motivating to anyone who wasn’t already highly inclined towards CS — and there were quite a few non-CS majors in the class.

In any case, CS135 was short-lived, soon to be replaced by CS101, which was based on Scheme — but that’s the topic for another post.

So, dear readers: did any of you have an unusual introductory CS course? What do you think about starting out with Turing machines?

Categories: Uncategorized

Inbox Infinite

It’s May 9th, and I struggle under the weight of 2389 unread messages, out of 5694 total, in my Georgia Tech e-mail inbox. The earliest unread message is from March 10th.

I’ve barely paid any attention to my e-mail at all for the past four weeks. There have been stretches of three, four, or even five days where I haven’t looked at e-mail at all, and when I did, I either skimmed subject headings as I gave my iPhone screen one solid flick, or I searched specifically for a subject keyword I told my students to include.

The Culprits

Although I certainly get behind on e-mail here and there — just like everyone else — I’ve been unusually bad at it lately. I suspect that five things conspired:

1) Georgia Tech added two-factor authentication for logging into the VPN. A while back they added a requirement that faculty be logged into the VPN to access e-mail through something like Apple Mail. Having to take the extra 30 seconds to log into the VPN was annoying, but not too bad. But requiring two-factor added a lot of annoyance to the process. I have to start a specific app on my phone called Duo, and hit the approve button within a limited amount of time for my laptop to connect to the VPN. I don’t always keep my phone with me when I’m working at the house, so to check my e-mail, I have to get up and hunt for my phone. I realize this seems like a small annoyance, but the net effect is that it encourages me to put off checking e-mail. That said, I do see the wisdom of two-factor authentication; I’m pretty sure John Podesta wishes he had used it. (Of course, I can read and send e-mail on my phone, but I dislike reading any e-mail longer than two sentences on my phone, and I despise trying to compose e-mails on my phone.)

2) The effective signal-to-noise ratio of e-mail is asymptotically approaching zero, and I’m not just referring to the battle against spam. Even e-mails ostensibly of value — confirmations of a book order on Amazon, an announcement of a show by a band you love that doesn’t play out very often, etc. — merge into an e-cacophony. In the early 90s, few outside academia had e-mail, and even when AOL’s dial-up service opened up e-mail to a wider audience, most of the e-mail you received came from people you knew and generally wanted to converse with. Even when companies started having websites to show their wares, you generally had to call and give them a credit card number the old-fashioned way, so most of your e-mail came from individuals, not companies. There was a time when starting your e-mail software (I used the Unix “mail” throughout school, switching to the more sophisticated “pine” when I started my postdoc; I switched from “vi” to “emacs” at the same time) sometimes induced tingly anticipation. Now, starting an e-mail client generally fills people with a combination of disgust and dread.

This is analogous to your U.S. Postal Service mailbox. How often do you receive letters in your physical mailbox that you truly desire? Nowadays it’s pretty much bills and advertisements. As e-mail has gone the same way, communication with people you want to communicate with has moved to other venues like Facebook Messenger. I confess I’ve had colleagues ping me on Facebook to double-check that I received some e-mail. On one occasion, an associate editor of a journal messaged me on Twitter during a similar epoch when I was running so behind on e-mail that I was walking backwards. It seems like every new form of communication eventually gets jammed up with junk. People flee to a new medium, only to slowly realize that the junk followed them.

3) This semester was the maiden voyage of my senior-level special topics class “Guitar Amplification and Effects,” which included a laboratory component. Lab assignments are much more time-consuming to create than paper-and-pencil homeworks. During the semester, I’d often run into roadblocks while formulating a new lab, and I’d wind up deciding to put off the lab and just move on to the next bit of lecture material. Then, one day, I ran out of lecture material. The net effect was that all of the labs wound up scheduled near the end of the semester, and when I was waist deep in 400-volt power supplies and vacuum tubes, I wasn’t looking at e-mail.

4) I spent the Spring 2014 semester teaching at Georgia Tech Lorraine, our remote campus in Metz, France. Halfway through that semester, I had a similar realization that I had fallen epically behind on e-mail. It occurred to me that I had previously developed a habit of using my phone to “clear out” obviously unneeded e-mail whenever I had a spare moment standing in line somewhere, so when I finally sat down at my laptop and pulled up my e-mail program, I had a reasonable set of e-mails to respond to. When we were in France, my wife and I bought cheap prepaid SIM cards with just voice and text messaging, so I didn’t have internet access on my cell phone. Hence, when I started up my mail client, I’d see a hundred messages instead of a few dozen, and I’d just shut it down and go on with whatever else I was working on.

Of course, I have internet access on my cell phone now, but I’m generally trying to spend less time fiddling with my phone and more time paying attention to the world around me.

5) In past eras when I’ve found myself behind on e-mail, I’ve avoided starting my e-mail client out of a sense of shame. But I recently discovered that I’m just overall happier on those days when I don’t look at e-mail at all. I’m overall more productive when I dive into the deepest end of whatever I’m working on.

Taming the e-mail beast

In spite of my realization in point (5) above, avoiding e-mail entirely is unsustainable. I clearly need to get my inbox under control, and I need to set up some specific tools to do so. There’s always a tradeoff with such schemes. For instance, if you have to convert some data files from one format to another, you have to weigh the effort required to script such operations vs. the effort required to do the conversions “manually.” If you only have a few files to convert, it may be more time efficient to just do it the brute force way. But if you have a ton of them to convert, the time invested in scripting the operations reclassifies itself as time wisely spent.
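The scripting-vs-manual tradeoff above can be put in back-of-the-envelope form: scripting pays off once the per-file savings outweigh the one-time cost of writing the script. A minimal sketch, with all the numbers invented for illustration:

```python
# Break-even analysis for "script it vs. do it by hand."
def worth_scripting(n_files, manual_min_per_file, script_min_per_file,
                    scripting_cost_min):
    """Return True if scripting the conversions saves time overall."""
    manual_total = n_files * manual_min_per_file
    scripted_total = scripting_cost_min + n_files * script_min_per_file
    return scripted_total < manual_total

# A few files: brute force wins. A ton of files: scripting wins.
few = worth_scripting(5, 2, 0.1, 60)    # False
ton = worth_scripting(500, 2, 0.1, 60)  # True
```

The same arithmetic applies to inbox tooling: a smart mailbox is only worth configuring if it will keep paying off over thousands of messages.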

The most obvious first step is to make a folder specifically for messages from the two heavy-traffic mailing lists I’m on: Synth DIY and Analogue Heaven. It seems that such e-mails already in my inbox need to be moved manually — doing so brought me down to 4268 messages, 1136 unread.

As a first cut, I created a “Smart Mailbox” called “GaTech Senders,” with the rule “from contains gatech.edu.” Smart Mailboxes don’t actually move e-mails out of your inbox; they’re basically search presets. This should allow me to quickly see e-mails from students, faculty, and staff. Some students and faculty use gmail instead of their official GaTech e-mail, so this would miss those, but I figure if any come up that are important I can add them to the rules somehow. (This might be tricky, since without digging in and mucking about with XML files, Apple Mail doesn’t have a way to mix “any” and “all” operations — i.e., you get logical “and” or logical “or,” but can’t create nested combinations of both without cluttering things up with “helper” mailboxes.) “GaTech Senders” shows 2148 messages, with 307 unread.

What I really want is to distinguish between e-mails that were specifically composed for me (or me and a small group of specific people) and broad announcements. If my School Chair or my Dean or someone in their administration writes a note to me as Aaron, I want that to pop up as higher priority than an e-mail announcing that a professor or student received an award. (This isn’t to say that the broad announcements aren’t important; they often contain changes to or clarifications of policies that I need to pay attention to.) Actually, that’s something that would be useful for all of my e-mail, not just e-mails from ramblin’ wrecks. So, I created a “To Me” mailbox with the rule “any recipient contains lanterma” (this covers my core eight-letter e-mail name as well as aaron.lanterman), as well as a “To Masses” smart mailbox with the rule “any recipient does not contain lanterma.” “To Me” has 3145 messages, 809 unread, and “To Masses” has 1201 messages, 327 unread.

Actually, “To Me” in its raw form misses a few; there’s another e-mail address for me that consists of “al” followed by 3 randomly chosen digits. There’s no way to add that to the “To Me” rules without giving up the inbox-only rule, because you have to set the rule combination operation to “any” or “all.” But, I get very little mail at that address; almost everything I receive there is an automated message from our IT department, telling me that my password is expiring or providing a summary of what fell into a spam trap. So I’ll make a separate smart folder for that weird e-mail address, and add an “any recipient does not contain” line to cut it out of the “To Masses” list.

Now I can make two new smart mailboxes, “GaTech To Me” and “GaTech To Masses,” that contain appropriate intersections. “GaTech To Me” contains 1174 messages, 67 unread, and “GaTech To Masses” contains 974 messages, 240 unread.
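The logic behind these intersecting smart mailboxes can be sketched as plain predicates. This toy assumes a message is just a dict with “from” and “to” fields; Apple Mail’s actual rule machinery works differently, and the addresses below are invented for illustration.

```python
# Toy versions of the smart-mailbox rules as boolean predicates.
def to_me(msg, name="lanterma"):
    """Mimics 'any recipient contains lanterma'."""
    return any(name in recipient for recipient in msg["to"])

def from_gatech(msg):
    """Mimics 'from contains gatech.edu'."""
    return "gatech.edu" in msg["from"]

# The combined mailboxes are just intersections of the predicates.
def gatech_to_me(msg):
    return from_gatech(msg) and to_me(msg)

def gatech_to_masses(msg):
    return from_gatech(msg) and not to_me(msg)

msgs = [
    {"from": "chair@ece.gatech.edu", "to": ["lanterma@ece.gatech.edu"]},
    {"from": "news@ece.gatech.edu",  "to": ["faculty-list@ece.gatech.edu"]},
    {"from": "synth@example.com",    "to": ["lanterma@gatech.edu"]},
]
flags = [gatech_to_me(m) for m in msgs]  # [True, False, False]
```

Writing the rules this way makes the limitation discussed above obvious: each mailbox is one fixed boolean expression, so arbitrary nesting of “any” and “all” requires composing helper mailboxes rather than editing a single rule.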

It looks like “GaTech To Me” contains a lot of “false positives.” For instance, event announcements from Georgia Tech Professional Education, the School of Music, and the Center for Teaching and Learning appear with a “To: lanterma@ece.gatech.edu” field, so they appear in “GaTech To Me” even though they were broadcast to a wide list. So my new strategy still needs work, but it’s a start.

Categories: Uncategorized

Observations about Coursera’s “Fundamentals of EE,” by Don Johnson

October 12, 2014 Leave a comment

Around January of 2014, I finished watching all the lectures for Don Johnson’s Fundamentals of Electrical Engineering on Coursera. While sorting through e-mail from around that time, I came across some observations I sent to a few colleagues, which I refined to share here.

1) As far as I can tell, Don’s course is unique in the set of ECE intro courses. There’s no class at another university that I can directly compare it to. It’s sort of a cross between UIUC’s Analog Signal Processing class (a circuits/signals-and-systems hybrid) and Georgia Tech’s Introduction to Signal Processing class (which focuses on discrete-time signals-and-systems), with information theory thrown into the mix. Don compares analog and digital communication schemes in the context of channel capacity. The course covers both analog and digital processing, with the focus on signals as carriers of information. The scope of the class is breathtaking; the last time I looked at a class and got a similar mind-blowing impression of its depth was MIT’s old Structure and Interpretation of Computer Programs.

It’s utterly brilliant. (But, I have to be careful not to equate “something a professor thinks is really interesting” with “something that will gel with undergraduates.” These are not always overlapping sets.)

Don includes a lot of material on Fourier series and transforms and frequency responses, but he doesn’t include anything on Laplace transforms. The Laplace omission makes sense since there’s not a lot of emphasis on generic switched-voltage-source, capacitor charging/discharging examples that typically eat up a lot of the time in most traditional sophomore circuits courses. Don goes into the frequency domain quite early and generally remains there through the rest of the course.

The uniqueness of the course, to me, provides one of the strongest arguments in favor of MOOCs. If you want a standard sophomore circuits course, nearly every EE department offers one, and frankly, there won’t be much difference between the one at Georgia Tech (ECE2040) and its equivalent at Southern Poly. But before this MOOC, the only way to see this material assembled in this particular fashion with this particular vision would be to move to Houston.

2) The sound quality is utterly horrible, and it’s a revelation to me how much that affects my overall perception of the class. The fan noise from Don’s computer is quite evident, and although I don’t know for sure, it sounds like he’s using the built-in mic on his computer. From lecture to lecture, or sometimes in the middle of the same lecture, the sound quality will suddenly change, as if someone was experimenting with different levels and noise reduction settings in some audio editing software package. I was alternating between Don’s course and some of Udacity’s courses, and the higher production quality of Udacity’s products is striking.

Magnus Egerstedt’s Coursera course on Control of Mobile Robots (basically a graduate linear systems theory course like Georgia Tech’s ECE6550, with neat material on robots added in) is on Coursera, but it was taped in the fancy recording studios at Tech, so the audio for his course is much better. That said, although the audio in Magnus’s course was mostly noise free, it was encoded at a relatively low level. I like to watch lecture videos while riding on the stationary bike, and with Don’s and Magnus’s courses, that was hard to do, even with my laptop volume control maxed out; but with Udacity’s courses, the sound was loud enough (perhaps they used a professional limiter like the Waves L1?) that it overpowered the sound of the bike.
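For the curious, the reason limiting lets audio be encoded louder is that you can boost the overall gain and then clamp only the few samples that would exceed full scale. A crude hard clipper illustrates the idea; real limiters such as the L1 use look-ahead and smooth gain reduction rather than this brick-wall clamp, and the sample values here are invented.

```python
# Crude "boost then clamp" limiting: raise gain, then hard-clip any
# sample that would exceed full scale (here, +/- 1.0).
def hard_limit(samples, gain=4.0, ceiling=1.0):
    """Apply gain, then clamp each sample to [-ceiling, +ceiling]."""
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

quiet = [0.05, -0.1, 0.2, -0.3]  # peaks well below full scale
loud = hard_limit(quiet)         # [0.2, -0.4, 0.8, -1.0]
```

The unclipped samples come out four times louder, while the one offending peak is pinned at full scale; that asymmetry is what makes limited lecture audio audible over a stationary bike.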

I can’t remember who said this, but someone once noted that the main key to professional looking video is professional sounding audio.

3) The way the in-lecture quiz questions were handled was absolutely maddening. On Udacity, an introduction to the upcoming quiz question is embedded in part of the preceding lecture segment. In Don’s course they were clearly thrown in after the fact; sometimes, it almost feels like they interrupt what he’s saying mid-sentence. They’re jarring. It would sometimes take me a second or two to realize I was being quizzed, and that the audio hadn’t stopped because of an internet slowdown. I didn’t feel like they kept me engaged, the way the Udacity quizzes did; I felt like they interrupted the flow, and it was hard to get the vibe back after the sudden interruption. The most bothersome questions were the ones that were thrown in to correct errors made in the main presentation. They’d start with a phrase like “The instructor made an error when writing the node-voltage equations. What should the second equation have been…” Sometimes I wanted to yell at the screen. It sounds like a small, petty detail, but it’s interesting how many small details add up to create a perception, good or bad, of the experience.

4) While Don put together this course brilliantly, I’m not sure Don is the best person to present it in this format. He sometimes tries really hard to sound really excited, and he’s clearly putting in a massive amount of effort, but his voice could send a dozen kittens in Consumer Reports’ laser pointer test center into the deepest slumber. He tends to trail off at the end of sentences, so sometimes the start of the sentence is above the threshold of the fan noise while the end of the sentence starts to dip below it. I realize some of that awkwardness probably stems from the unnaturalness of having to talk to an empty room, which I find to be tremendously difficult.

I should note that my comments above apply to the first Coursera offering of the class; they may have made improvements in newer offerings.

Categories: Uncategorized

The tyranny of semesters & the trouble with Coursera

October 3, 2014 1 comment

Throughout numerous news reports and blog posts, comments on those news reports and blog posts, and e-mail discussions prompted by them, many legitimate criticisms of methods of teaching and learning outside of the usual on-campus class structure have been raised. But those usual on-campus classes have their own limitations; we are just so accustomed to working around those limitations that they’re barely noticed. Why should these same limitations be mapped into the online space? Why do we keep putting horseshoes on our automobiles?

This leads us to the problem with Coursera: it has the word “course” in its name. I’m not saying the courses themselves are bad — the quality on Coursera varies wildly, but many of them are quite good, and I hope to do a “course” on Coursera at some point — it’s just that the very concept of a “course” is artificial.

Courses, along with time units like semesters and trimesters and quarters, are organizational artifacts borne of the practical need to allocate chunks of time associated with chunks of physical space and chunks of biology called “students” and “professors,” and to get them to line up in some way those chunks of biology can readily remember, like meeting TuTh at 2-3:30 or MWF 1-2, starting on a certain date and ending on a certain date. Are any of those optimal in any way? Can anyone tell me if there’s any research on whether three days a week for 50 minutes is better or worse for learning than two days a week TuTh? Maybe there’s some material that’s best learned MTuW or WThF. Can anyone tell me? Has anyone even asked?

What about time of day? I have heard rumors of the existence of “morning people,” and there are probably some faculty and students who fit in that category. But for students who are not in that category, I conjecture that a 2:30 PM class is going to be a hell of a lot better for learning than an 8:30 AM class. Has anyone studied that? In all the discussion about problem-based learning and flipped classrooms and clickers or whatever, if someone could show that simply not having class at 8:30 AM resulted in massive improvement in learning outcomes, would we change our scheduling to accommodate that finding? Or would we not even bother to ask the question, given how limited we are on classroom space, and just implicitly state — whether we mean to or not — that we believe the material taught at 8:30 AM is less important than the material taught at 2:30 PM?

This connects to Lanterman’s Temporal Maxim of Education: Any method of online course delivery is superior to an in-person class that meets at eight-assclock in the morning.

Dear university professors reading this post: Did you stop learning after you finished your PhD? If not, how many things have you learned in the past decade that had specific start and end dates?

Categories: Uncategorized

“If everyone could make a living doing what they love to do, then what could be better?”

While watching the closing comments from the “Future of Education Panel” at Maker Faire 2011, I resonated with the answers to the question, “What is your hope for the future of education and technology?”

Mitch Altman: “I would love to see a world where everyone can truly explore what it is that they love to do. And if everyone could make a living doing what they love to do, then what could be better? That would mean we have a world full of many more people living fulfilling lives. And if we had a world where people could be encouraged to explore who they are and do what they love, then our education system would be tops.”

Ben Heckendorn: “Expounding from that point, I grew up in a small town… think if it was 100 years ago, what I do for a living, assuming they had graphic artists 100 years ago, ‘Mary’s Cow Powder’ advertisements in the newspaper… you would never get out of your small town. You’re stuck there. You can do what you can do in that element, but that’s the limit of your scope. But the internet has opened up the world to everyone. Anything in the world is no further away than your computer. That is a great resource, and we’ve only started to scratch the surface of its potential.”

Jeri Ellsworth: “I’d encourage everyone to go out and become a mentor. I try to help out as much as I can.”

In her closing comments, the moderator Michelle Dobson said: “It’s time that we knock down the four walls of the classroom… The world is now the classroom… students will learn when they can be engaged in their learning; have teachers as facilitators of knowledge, not as the givers of knowledge, the holders of knowledge. We need to have authentic learning opportunities for students to succeed.”

I was particularly moved by Altman’s vision: “I would love to see a world where everyone can truly explore what it is that they love to do. And if everyone could make a living doing what they love to do, then what could be better?” That, right there, is what I want my role as a researcher and an educator to be — to help make that world a reality.

Categories: Uncategorized

Almost every article you’ve read about MOOCs is full of s***

December 22, 2013 Leave a comment

I launched this blog two years ago this month with something of a bang. That was just before the launch of Udacity, Coursera, and other corporate purveyors of “Massive Open Online Courses,” catchily abbreviated as “MOOCs.” As is the case with many blogs, it fizzled out a few months after that; my last post was in March 2012. I’ve decided it’s time to come back to it.

In the mean time, hundreds of articles, both pro and con, both enthusiastic and wary, have been penned about MOOCs and MOOCish things. They’ve been written by Pulitzer prize-winning journalists, corporate bosses, politicians, venture capitalists, and university professors and administrators up and down the academic ladder.

Almost all of these articles — both pro and con — are full of shit.

Even the ones that aren’t totally full of shit are at least partially full of shit.

The primary trouble is that even the authors who are most anxious to “disrupt” our current educational system are embedded so deeply in the status quo that they don’t realize how much they are taking for granted. They believe certain things “must be so” that don’t necessarily need to be so, and they suffer from meta-unawareness about what those things are. As Marshall McLuhan quipped, “I don’t know who discovered water, but it wasn’t a fish.”*

I am reminded of Alan Kay’s words, from The Early History of Smalltalk V:

I took the whole group to Pajaro Dunes for a three day offsite to bring up the issues and try to reset the compass. It was called “Let’s Burn Our Disk Packs.” I used the old aphorism that “no biological organism can live in its own waste products” to plead for a really fresh start… The reason I wanted to “burn the disk packs” is that I had a very McLuhanish feeling about media and environments: that once we’ve shaped tools, in his words, they turn around and reshape us. Of course this is a great idea if the tools are really good and aimed squarely at the issues in question. But the other edge of the sword cuts as deep — that inadequate tools and environments still reshape our thinking in spite of their problems, in part, because we want paradigms to guide our goals… I wanted to stop, dynamite everything and start from scratch again.

I will save the details for future posts. For now, just hop on board, and fasten your seatbelts — and leave the smoldering disk packs behind.

And remember: my posts will be mostly full of shit too.

Because I am one of the fish.

*I misattributed this quote to Alan Kay for years; I only today realized that Alan was quoting McLuhan.

Categories: Uncategorized

The moment you see a headline like this…

You’re an Engineer? You’re Hired: The unemployment rate in the field these days is a super-low 2 percent.

…you automatically know the comments section will be overflowing with unemployed engineers begging to differ.

Where is the disconnect? I fear it may be found in the quote in the article that includes the phrase “especially for young ones.”

Categories: Uncategorized

Welcoming two new horsemen of the Edupocalypse

February 13, 2012 Leave a comment

I’d like to introduce you to two new edubloggers: my Georgia Tech School of ECE faculty colleague Rob Butera, and Ed Booth, a Georgia Tech Computer Science grad who took my Multicore and GPU Programming for Video Games class back in 2008. (I love hearing about what my former students are up to.) Coincidentally, they both touch on similar topics about grading in their recent posts.

Ed has radically titled his new blog The Failure Machine, employing the interpretations of failure formulated by Seth Godin and Eric Ries. Ed asks: “Typical American high schools and colleges accept 70% accuracy on exams as a passing grade. Assuming that exams accurately assess understanding and proficiency…that means we push our students to the next level long before they have achieved mastery of their current material. If this happened in a professional sports franchise, the Major Leagues would be filled with players who struggle to compete. What does this say about how our schools prepare us for our careers?”

Rob has titled his blog No Curve, so you can guess what his first post is about. He writes: “Most of my blog posts will likely be about structural and methodological issues in education, why engineering education is under-researched yet also misunderstood, and why the liberal arts have an identity crisis. But I feel this first post on this blog needs to explain the chosen domain name.”

Please drop by their blogs and make comments!

Categories: Uncategorized

Gamifying the Edupocalypse

January 22, 2012 2 comments

A couple of days ago, I attended an unconference-style “catalyst workshop” on Gamification for Education hosted by Georgia Tech’s Center for 21st Century Universities. I have some thoughts on specific topics that were brought up at the workshop, as well as some musings on games-(in,via,through,above,below,whatever)-education that occurred to me after the workshop, which I will address in future posts. But I first wanted to generally frame the discussion.

I automatically become skeptical and nervous when I hear the word “gamification,” since it often seems to imply applying tropes from games to Thing X, Y, and Z without a lot of focus on the particular nature of Thing X, Thing Y, or Thing Z. In particular, I duck for cover when words like “badges,” “points,” “achievements” (to use Microsoft’s term) and “trophies” (to use Sony’s term) start to get piled onto activities like brushing teeth and exercising. Ultimately everything becomes fungible — eat enough low-fat potato chips, get a free train ticket to Boise!

The clearest — and hence, most terrifying — articulation of the Omega Point of gamification I’ve seen is Jesse Schell’s DICE 2010 keynote, which was likely the tipping point after which the syllables “game-uh-fa-cay-shun” were on the lips of every Mad Man from New York to New Delhi, each one hoping to unleash their inner Skinner. Although Schell’s speech was widely heralded as a blueprint for a brave new cyberworld, and excitedly embraced by a slew of societal actors as a novel way to bring people around to their cause (whether that cause is buying soda or riding a bicycle), I found myself recoiling in horror. If you haven’t seen the talk, and don’t have the full 28 minutes, 19 seconds needed to see it in its entirety, just start at the 20 minute mark. By the 23 minute mark you’ll have an urge to voluntarily douse your keyboard in whiskey and set your computer on fire. By the 27 minute mark you’ll have an urge to involuntarily douse your computer in vomit and set your hair on fire. To be fair, it’s not entirely clear to me whether Schell was saying that his predictions were a cause for celebration, or whether he was merely pointing out that these things are coming, and encouraging his audience to be the Gamifiers instead of the Gamified.

But my visceral revulsion to a future in which every action is recorded, with some actions rewarded, along with questions about which specific powers would want to reward which specific behaviors, is secondary to the critique my colleague Ian Bogost made of Schell’s talk. All this chatter about leaderboards and progress bars neglects the true potential of “games,” which is to get people thinking about the underlying behavior of specific complex systems, whether those systems are dental, defensive, or democratic, and whether the subjects are cavities, castles, or countries. Bogost calls this procedural rhetoric:

Procedural rhetoric affords a new and promising way to make claims about how things work… video games can make claims about the world. But when they do so, they do it not with oral speech, nor in writing, nor even with images. Rather, video games make arguments with processes. Procedural rhetoric is the practice of effective persuasion and expression using processes. Since assembling rules together to describe the function of systems produces procedural representation, assembling particular rules that suggest a particular function of a particular system characterizes procedural rhetoric.

Another way to understand procedural representation is in terms of models. When we build models, we normally attempt to describe the function of some material system accurately… Models of all kinds can be thought of as examples of procedural rhetoric; they are devices that attempt to persuade their creators or users that a machine works in a certain way. Video games too can adopt this type of goal; for example, a flight simulator program attempts to model how the mechanical and professional rules of aviation work. But since procedurality is a symbolic medium rather than a material one, procedural rhetorics can also make arguments about conceptual systems, like the model of consumer capitalism in Animal Crossing…

In response to Schell’s presentation at DICE 2010, Bogost wrote:

…games are not primarily comprised of incentives and rewards in the first place, not even the more unusual ones Schell presents in his talk. The heart of games is not points, but process. Games have the capacity to persuade us because they can depict perspectives on how things work, and they can give us insights into the complex and often ambiguous connections between them… the most ironic example Schell presented in his talk at DICE is that of the Ford Fusion dashboard. The growing plant in the dash holds promise not because it offers an incentive to drive in a fuel-efficient manner, but because it reveals the combinations of mechanical, electrical, and combustive processes that lead to fuel-efficient driving.

The Fusion driver does not jump with Pavlovian delight upon seeing a lively fern, but noodles with intrigue over the combinations of traffic patterns, driving techniques, and topology that lead to different results. She might ask questions like “Why does driving a certain way have an impact on fuel consumption?” and “How are neighborhoods and cities designed to encourage and discourage such driving?”

The sentence about “Pavlovian delight,” or I should say the lack thereof, is classic Bogostese. (I’m going to see how many times I can employ variations of the phrase “noodles with intrigue” over the next week.) My wife and I recently bought a Nissan LEAF, so questions such as those mentioned in that paragraph are particularly on my mind. There’s a gauge on the LEAF that tells you how economically (and hence, I suppose, ecologically) you are driving. Unfortunately the car doesn’t provide much feedback into exactly what factors go into that gauge — as far as I can figure out, the LEAF thinks I am driving most economically when I am at a stop light, and least economically when I am pressing the brake or accelerator pedals, which I refer to as “driving.”

Wow — there’s actually a company called Badgeville, which calls itself “The Behavior Platform.” Ah, look here! “Reward customer and employee behavior with smart gamification techniques.” Excuse me a moment, I need to go score some Pepto-Bismol…

Categories: Uncategorized