Adium 1.0.3 beta now available
Sunday, April 22nd, 2007
It’s that time again. Happy testing!
As discussed previously, Adium is participating in this year’s Google Summer of Code.
Google gave us six slots, just like last year. We’ve made our choices; here they are:
On Jabber: Yes, we had a similar project last year, but that was using Smack (a separate Jabber library), which we had to drop because of the Java-Cocoa bridge going away. Andreas is back to do it again in libpurple.
Also, we’re not going to have a separate student blog this year. Students’ blog posts will be here, on the main blog.
Since the Adium 1.0 release, I’ve made screencasts of various Adium features, and I will be creating even more to cover progressively more-advanced material. Here’s how I make them.
Before recording, I reset everything to defaults: both system defaults and Adium defaults. I keep as few things non-default as possible. This means a minimalist Dock, nothing on the desktop, and usually a plain solid-color desktop picture (the default is an abstract drawing called “Aqua Blue”, but I figure a solid color will compress better). For Adium, though, I use a different desktop picture: the cow desktop picture that we used for the Adium screenshots.
I try to do the whole thing in my head, improvising a script as I go and forgetting it immediately afterward. I make sure to pause whenever an OK button or an insertion point is on the screen, so that if necessary, I can cut that moment out and repeat it to extend time. (You can tell one place in episode 3 where I forgot to do this; the insertion point doesn’t blink out again for several seconds.)
The setup is 800×600, Animation, max quality, 30 fps, generally with no audio (though I’ll use system audio on Episode 6, Events). You could theoretically use 640×480, but that’s too small for most OS X applications to be usable; 800×600 is the practical minimum.
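For reference, a rough command-line analogue of those capture settings would look something like the hypothetical ffmpeg invocation below: 30 fps, the Animation codec, and the top-left 800×600 of the screen. This is not the workflow described here (iShowU does the capturing); it’s just a sketch of the same settings.

# capture the top-left 800×600 of the main display at 30 fps, Animation codec
ffmpeg -f avfoundation -framerate 30 -capture_cursor 1 -i "Capture screen 0" \
    -vf "crop=800:600:0:0" -c:v qtrle capture.mov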
The Easy Setup that I use is 720p HDV Apple Intermediate. FCE doesn’t allow you to set the movie’s dimensions to a specific width and height, so the best you can do is overshoot by as few pixels as possible; the 1280×720 frame of 720p leaves 480 px of unused width and 120 px of unused height around an 800×600 recording.
Immediately after adding the video track, I move the playhead to somewhere within the clip, then switch to the Canvas and drag the video frame until it’s in the top-left (which requires the Canvas to be set to Image+Wireframe). It helps to open the clip in the Viewer and drag the clip in the Canvas until the black gaps have disappeared from the top and left of the Viewer. The other way to know that you’ve got it in the right place is to look for the blue frame around the clip in the Canvas to be down to 1 px on the top and left sides. Either way, a screen magnifier can help: the built-in CloseView (in the Universal Access prefs, where it’s been called simply “Zoom” since it came back in Jaguar), Pixie, or DigitalColor Meter.
Next I start adding markers at key points in the video. These include clicks on UI elements, the start and end of typing runs, and sometimes the start of a mouse move. I use these markers to sync up the video to the narration. I also cut out insertion-point cycles and OK-button pulses in the Timeline, and repeat these as necessary later.
I can’t say yet how I’ll handle audio tracks in the iShowU recording; I haven’t had to deal with that. It’ll come in Episode 6.
I used to do this in TextWrangler. But plain text has its limitations, so I switched to RTF in TextEdit in episode 4. This lets me use bold in the script (I generally avoid italic, as it reduces readability, and I need to read it quickly).
I use Luxi Sans 14 as the font. This is a big and readable font, making it good for reading the script off the screen.
Audio Recorder is a free utility that does one thing and does it well.
The trick for me is speaking up so that my microphone can pick it up. I’m very soft-spoken, so if I speak normally, it sounds like I’m talking through clenched teeth. You may notice this at times anyway. I’m really not speaking through clenched teeth; I’m speaking normally. I feel like I’m almost shouting when I speak in a manner that sounds normal when I play it back.
On the hardware side, I use an MXL-990 microphone with a Behringer MIC200 hardware preamp. (Here’s a photo of my preamp configuration.) The preamp’s output runs to the built-in line-in jack on my Mac Pro.
Audio Recorder saves the audio as an AIFF file. For long-term storage, I use QTAmateur to convert the AIFF file to Apple Lossless in a QuickTime movie, to save space. (More on QTAmateur later.)
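For reference, Mac OS X’s bundled afconvert tool can do a similar AIFF-to-Apple-Lossless conversion from the command line. It writes an .m4a file rather than a QuickTime movie, so it’s not quite what I do, just a sketch of the same idea (Narration.aiff is a made-up filename):

# convert AIFF to Apple Lossless in an MPEG-4 container
afconvert -f m4af -d alac Narration.aiff Narration.m4a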
Extending time means using the razor blade to cut each track as necessary, then taking a freeze-frame from just before the cut, sizing it correctly, and overwriting it into the gap.
My technique for doing this in FCE is somewhat involved; if anybody knows an easier way, I’d welcome it.
I added overlays for Episode 3, making them in Lineform and adding them to the movie in a second video track.
The shade overlays (which highlight some element on the screen through a hole in the shade) are done rather simply: I draw a rectangle with a hole in it in Lineform, add it on the second video track, and bring the clip down to 30% opacity.
The arrows overlay that I use to highlight the incoming- and outgoing-message header colors is done basically the same way, except that drawing an arrow is different from drawing a rectangle with a hole in it, and I left the clip at 100% opacity rather than bringing it down to 30%.
The standard export will use Apple Intermediate (which is lossy, and therefore bad) and totally uncompressed audio. Instead, I use the Export→QuickTime Conversion command to convert to Animation+Apple Lossless.
This is the first export of two, so I want everything lossless. For the video, I use the Animation codec at 24-bit (Millions, no +), automatic keyframe rate, Best quality; for the audio, I use the Apple Lossless codec at 44.1 kHz, mono, Best rendering quality. The sample rate of 44.1 kHz is the same sample rate I use for the recording (you can set that in Audio MIDI Setup, which is in /Applications/Utilities).
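For reference, a rough ffmpeg equivalent of that lossless export would be the sketch below; it is not the FCE workflow described here, and Foo-edit.mov is a hypothetical input name. ffmpeg’s qtrle encoder is the Animation codec, and alac is Apple Lossless.

# lossless intermediate: Animation video (24-bit RGB) + Apple Lossless mono audio
ffmpeg -i Foo-edit.mov -c:v qtrle -pix_fmt rgb24 -c:a alac -ar 44100 -ac 1 Foo.mov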
This is where you get rid of all that black space to the lower-right of the intended frame. The command is:
qtsetclip 0 0 800 600 Foo.mov Foo-clipped.mov
qtsetclip will generate Foo-clipped.mov, which is a reference movie that points to Foo.mov for its media. For this reason, do not delete Foo.mov or overwrite it with Foo-clipped.mov; Foo-clipped.mov will not work without Foo.mov. Seriously.
QTAmateur is basically the old MoviePlayer, reimplemented for Mac OS X using QTKit. This makes it really handy for converting videos from one format to another—in this case, from Animation+Lossless to H.264+AAC.
This is export #2—the final render. The video format, more specifically, is H.264 Medium+frame reordering+automatic keyframe rate. The audio format is AAC (ahem, “MPEG-4 Audio”), 22.05 kHz (yes, not 44.1—I mix it down here, since I don’t really need CD-quality audio for my own voice), mono, best quality (quality is under Options). I’ve not experimented with VBR.
I also turn on “Prepare for Internet Streaming”, using “Fast Start—Compressed Header”. This lets QT start playing the movie when it predicts that it will not run out of movie data before the movie ends. I don’t do this in the export from FCE simply because I don’t need it, though I don’t think it would hurt anything if I did.
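For reference, a rough ffmpeg equivalent of this final render, again only a sketch and not what I actually use, would be:

# final render: H.264 (medium preset) + mono AAC at 22.05 kHz,
# with the movie header moved up front (ffmpeg's analogue of Fast Start)
ffmpeg -i Foo.mov -c:v libx264 -preset medium -c:a aac -ar 22050 -ac 1 \
    -movflags +faststart Foo-final.mov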
The filename of each movie is “Adium1.0-EpNUM-BRIEFSUMMARY.mov”. For example, episode 3 is “Adium1.0-Ep3-Chatting.mov”.
Revver lets you present the movie as Flash or as QT. Obviously, since our audience is almost entirely Mac users, we’re using QT on the Adium screencasts page—no need for Flash.
The process totals many hours. The recording is actually quite short: even the uncut videos are rarely longer than a minute, though the voiceovers take longer. (Remember what I said earlier about extending time on the video track. You will need to.)
The longest part is the editing. It’s hours and hours of tedious work, making sure all the video and audio syncs up correctly. It’s worth it in the end, though—I’m proud of the five finished products so far.
If you have any questions or refinements, feel free to post them in the comments.
There is a possible AIM/ICQ connection problem. We’re getting reports about it on IRC, and some of the Miranda folks are seeing it too; I’m also seeing sporadic reports of problems from Pidgin.
We’ve already received about 10 emails regarding password problems on AIM/ICQ.
To be perfectly clear, this is something we have no control over. This is a problem with the servers that AOL maintains for the AIM and ICQ protocols. It would be like telling a crab boat captain on the Bering Sea to control the waters. This is not something we can help with, unfortunately. It’s just something we can watch.
Check out pidgin.im, the new home of the instant-messenger-formerly-known-as-Gaim. The Pidgin team explains there the reason a name change was needed.
Gaim is henceforth “Pidgin”, a really awesome name for an instant messaging client. (If you aren’t familiar with the term ‘pidgin’ from linguistics, it’s worth looking up.) Note also that our ideological ties to the project (already significant, given that the core library is the foundation of our messaging connectivity and that we share an open-source philosophy) have strengthened as a result of the name change. Adium’s a duck, Gaim’s a pigeon (logo still pending, but it’ll be iconic, I promise), and the text-based client based on libgaim, formerly “Gaim-Text”, is now “Finch”. I for one welcome our avian overlords 🙂
Through the name change, ‘libgaim’ has also gained a life of its own. The library we were previously hacking together out of parts of the Gaim source is now an independent, recognized entity (as of Gaim 2.0.0b6, actually). Accordingly, it has its own name now: libpurple. (The name is a play on ‘prpl’, which is what a libpurple instant-messaging service, or protocol, plugin is called.)
Yesterday, Google released Google Desktop for Mac. I’m happy to report it should import and search your Adium chat transcripts right out of the box. Google Desktop can search any content that’s Spotlight-enabled, and we’ve had our logs searchable by Spotlight since Adium 1.0.
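If you want to confirm from Terminal that your transcripts really are Spotlight-visible, mdfind queries the same Spotlight index whose importers Google Desktop relies on. This sketch assumes the default Adium data location; “pounce” is just a sample search term:

# search only Adium's data folder for a term that appears in your logs
mdfind -onlyin ~/Library/Application\ Support/Adium\ 2.0 pounce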
Thanks to the Google Mac team for the shoutout, and congratulations on your release!