Adding the Human Element in Screencasts
Brooks Andrus has a good post and video about including the human element in screencasts. Brooks writes:
Screen video alone is not enough. You need to humanize your content by getting in front of the camera and engaging your audience. And no, I'm not talking about long-winded monologues either. Several 5-7 second talking-head elements can go a long way toward winning over and maintaining the interest of your audience. Let people see your face and don't be afraid to be emotive / loose. Let them see the twinkle in your eyes and the smirk on your face. As social creatures it's how we empathize and bond with each other.
Brooks starts his videocast just this way. He gets in front of the camera.
Brooks makes some other good points as well, such as shifting camera angles every 2-8 seconds to keep the audience's attention. But I want to focus on this opening personal element a bit.
Last week, after posting about the Harrison Clarity videos, some people at Microsoft called me to get some feedback on other experimental videos they wanted me to watch.
While I was relaying feedback about the videos, I realized how much I enjoyed the human element at the beginning. I like to know who's talking to me, so I can visualize them. It's more personal and friendly to include a talking person at the start. It puts the appeal of the video on an entirely new level.
A while ago I tried incorporating a talking head at the beginning of a screencast, and I found out that Camtasia Studio can't accept a camcorder as an input device -- it only accepts a web cam. Web cams usually can't capture at a high enough frame rate to keep the mouth in sync with the words, so the result looks poor.
I believe the correct way to integrate a talking head is to record it with a video camera, export the footage as an AVI file, and insert that AVI into a Camtasia Studio timeline (or another editing application).
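If you're comfortable on the command line, the export step above can be sketched with ffmpeg. The post doesn't mention ffmpeg, and the filenames and codec choices here are my assumptions, not the author's workflow -- just one way to get camcorder footage into an AVI an editor can import.

```shell
# Hypothetical example: build an ffmpeg command that transcodes raw
# camcorder footage into an AVI a screencast editor can import.
# The filenames and codec settings are assumptions for illustration.
INPUT="talking-head.mov"    # raw footage pulled off the camcorder
OUTPUT="talking-head.avi"

# Motion JPEG video plus uncompressed PCM audio is a widely supported
# AVI combination for older editing tools; -q:v 3 is near-lossless.
CMD="ffmpeg -i $INPUT -c:v mjpeg -q:v 3 -c:a pcm_s16le $OUTPUT"

# Print the command so you can inspect it before running it.
echo "$CMD"
```

Running the echoed command produces an AVI you can then drop onto the editor's timeline alongside the screen recording.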
Scott Skibell also includes the personal element at the start of his screencasts. Here's a screencast on microphone comparisons that is worth watching.
The only problem with including a human element at the start is that it's much more difficult to pull off. First, you have to make sure you look all right for the camera. Then you have to set up the video camera, lighting, and environment. You have to figure out how to connect a microphone to your video camera (an expensive lapel mic, I'm guessing? not sure). You either have to memorize your lines or get a good feel for what you plan to say. Reading from a screen isn't going to work, I'm guessing (here's an example where it seems like the person is reading from a screen). You then have to extract the footage, convert it to another format, and integrate it into your screencasting program. When you add up all this extra time, I'm betting the number of screencasts you actually record plummets.
However, in a web environment, maybe this desire for professional cinema is overkill. Maybe the "reality" camera that catches you unshaven, in a messy house, with wandering kids, ringing phones, mediocre lighting, and an unrehearsed script -- but in a real situation -- is even more appealing because it is transparent and authentic. I need to try it.
About Tom Johnson
I'm an API technical writer based in the Seattle area. On this blog, I write about topics related to technical writing and communication — such as software documentation, API documentation, AI, information architecture, content strategy, writing processes, plain language, tech comm careers, and more. Check out my API documentation course if you're looking for more info about documenting APIs. Or see my posts on AI and AI course section for more on the latest in AI and tech comm.
If you're a technical writer and want to keep on top of the latest trends in tech comm, be sure to subscribe to email updates below. You can also learn more about me or contact me. Finally, note that the opinions I express on my blog are my own points of view, not those of my employer.