I listen to too many podcasts

At some point I became one of those people who listens to podcasts at 2x speed, because I’d subscribed to so many — and in many cases they had long backlogs I wanted to catch up on — that I wasn’t getting through episodes at normal speed fast enough to even keep up with the new releases, let alone asymptotically approach the present.

We’re in something of a Podcast Era now, and have been since the early 2010s. Long ago, before I even started my first blog (remember blogs?), there was a First Podcast Era, which began shortly before Ben Hammersley named them “podcasts” in 2004. (Before that, of course, there had been “internet radio” — MP3 streams, which we called shoutcasts after the popular server software produced by Nullsoft; which we listened to at 96 or even a luxurious 128 kbps in Nullsoft’s ubiquitous Winamp media player; and which, as I recall, consisted mostly of European ambient and techno — but you had to be at your desktop computer for those, because in 1998 the iPod was still three years away.) Back in the First Podcast Era, of course, there was no Spotify to offer far-right bigots a hundred million dollars, so you had to have a podcatcher app, or eventually iTunes, and figure out how to copy and paste an RSS URL into it. And there was no Patreon to funnel millions a year to “far-left” bigots, and as I recall not much in the way of podcast networks or available sponsorships, so most podcasts had what we would now consider fairly amateurish production, and were strictly side gigs.

Anyway, I used to listen to a lot of podcasts in 2005–2007 or thereabouts, but I sort of fell out of the habit, and by the time the Second Podcast Era got started in earnest around 2012–2013, I didn’t consider myself “a podcast guy”. But eventually I had friends who were doing podcasts, and I wanted to support my friends, and one thing led to another, so here I am with a paid Pocket Casts account and, uh…76 subscribed feeds.

Some of them were limited runs, or have simply ended, the way things sometimes do, without a planned finale; some work on a seasonal schedule and are currently between seasons; others are just on some kind of hiatus. So only about two-thirds of those are still releasing new episodes with any frequency, but it’s still a lot to keep up with.

All this woolgathering was by way of establishing why I’m only just getting to the fourth season, “Twilight Mirage”, of Friends at the Table, an “actual play” (i.e. episodes are recordings of gameplay sessions of what we used to call a “pencil-and-paper role-playing game”) podcast that I believe is one of the best of the genre. FatT alternates (roughly) fantasy with (roughly) science fiction, so Twilight Mirage is the second sci-fi season, and the last episode I listened to was the post-mortem Q&A for “Winter in Hieron” and its prequel “Marielda,” which formed the second fantasy season.

The, I suppose, impresario (and also gamemaster) of Friends at the Table is Austin Walker, a critic and author, the former EIC of Vice’s former Waypoint games vertical (now reduced to “Vice Gaming” because corporate decided there was too much individuality, though the podcast — remember podcasts? — Waypoint Radio lives on with Austin as host). Austin is a genuinely brilliant person, and Disney even let him write some Star Wars stories, and his talents as a GM are matched only by those of the FatT cast, and in particular Jack de Quidt’s stunning work composing the scores for each season. So when, around the midpoint of the Marielda/Winter postmortem, I heard Austin describe the season 2 (“COUNTER/Weight”) episode “An Animal Out of Context” as “the best thing I’ve ever made”, I decided, well, I remember that being great, but I should go listen to it again.

I don’t know if that episode, which intersperses small vignettes with the other main characters among longer stretches of Jack and Austin playing a GM-less, two-player storytelling game of their own design, recontextualized for the COUNTER/Weight setting, would have the same impact for someone who hadn’t followed the story up to that point, so I hesitate to recommend listening to just that episode alone. But there’s a moment, three-quarters or a little more of the way through, where, in one of those side vignettes, Art Martinez-Tebbel casually mentions that “an animal out of context” (in a future zoo, as it happened) is a hard thing to understand — giving, presumably unknowingly in the moment, the episode its title and also perfectly summarizing the alienation Jack’s character feels. I had to pause and take a breath when I heard that again, because it crystallizes a lot of what I think is so great about FatT: that Austin and Jack were telling such a powerful story; that Art, separately, got at such a crucial idea in a different (as it were) context; that Austin and Alicia Acampora, the producer and also a cast member, caught that brief phrase and realized how well it evoked the episode’s themes.

This is a very rambly weekend post, but the short of it is, I’m glad that, in this Second Podcast Era, it can be feasible for a show like Friends at the Table to run for over six years, and if you think a collaborative longform fiction radio show “focused on critical worldbuilding, smart characterization, and fun interaction between good friends” sounds interesting, you should try giving it a listen.

New project: Cproj

A couple of weeks ago, when I got started on what became Scuttle, I mentioned that the reason I wanted a simple but adequately featureful plain-C unit testing framework was that I was frustrated with the limitations of the more ad-hoc solution I’d built into my old cproj() shell script. That script generates an even simpler test facility, and it turned out to be too simple.

Now that Scuttle v1.0.0 is done and published, I want to revisit Cproj. I haven’t touched this script in years, and I think there’s a lot of room to clean it up and make it more useful — and update it to use Scuttle as the testing framework it builds into the project skeletons it generates!

Much like Scuttle’s, the animating philosophy of Cproj is that it should be as self-contained as possible: the script contains the templates for the files it generates as here-documents, and runs pattern substitutions on them to produce the right output. The first time it’s run, the old Cproj actually writes those templates out to /etc/skel/proj/ (if run as root) or to $HOME/.skel/proj/, and on subsequent runs reads them from disk instead. I’m no longer sure there’s much value in doing that, so I’ll probably remove that functionality and just use the here-docs directly every time, like Scuttle does.
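
The mechanism itself is simple enough to sketch in a few lines of Bash, with hypothetical names (cproj_demo, the _init declaration) rather than Cproj’s actual templates:

# Minimal sketch of the here-doc approach; hypothetical names, not
# Cproj's actual templates. Interpolation happens for free because the
# here-doc delimiter is unquoted.
cproj_demo() {
    local name="$1"
    local guard="${name^^}_H"    # uppercase the project name (bash 4+)
    mkdir -p "$name/include" "$name/src"
    cat > "$name/include/$name.h" <<EOF
#ifndef $guard
#define $guard

int ${name}_init(void);

#endif /* $guard */
EOF
}

Calling cproj_demo foo drops a foo/include/foo.h with the project name interpolated everywhere the template references it; the real script does the same for the Makefile, the sources, and the test skeleton.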

After I finish fixing up Cproj, I should be ready to use my revised tooling to start a, you know, real project. Or maybe I’ll just discover another bit of tooling that needs revision.

Scuttle: v1.0.0 is live!

All right, it took me two weeks, but Scuttle v1.0.0 is now live. I think it’s pretty easy to use, and it’s very lightweight. The full README is over at GitHub, of course, but it basically works as I outlined in the project introduction. No one really needed a new unit test framework, but I wrote one anyway, and now I’m going to use it in subsequent projects as often as I can.

I hope other people also find it useful!

Happy Third Lastjediversary

Three years ago today, December 15th, 2017, the best Star Wars movie to date — by the director who has so far shown the best understanding of what Star Wars is about and what it’s for — hit theaters. So this is just a quick appreciation post for Rian Johnson and The Last Jedi, which was so much better than it needed to be, and whose most important themes were, as far as I can tell, immediately and thoughtlessly cast aside as soon as J.J. Abrams came back to the franchise.

I should do a whole-series rewatch soon.

Scuttle: pre-update update

Scuttle is now working as intended for the standard use case. As described in the introductory post, for a C project with a standard layout, adding unit testing with Scuttle requires the following steps:

  1. Install scuttle.h either locally to the project or in the system include path
  2. Install scuttle.sh either locally to the project or somewhere on your $PATH
  3. Add test suite source files under test/, each corresponding to a module of your project, named test_<module>.c, using Scuttle’s simple macros
  4. Add a test target to your main Makefile as follows:
test:
    bash scuttle.sh test
    $(MAKE) -C test

Scuttle will generate test/Makefile, test/test_<module>.h, and test/test_<module>_gen.c for each test/test_<module>.c suite, as well as the test/test_<project>.c harness; and the generated Makefile’s test target will build and run the test harness and redirect its output to test/log/test_<project>.log.

There’s a lot of room to make this more flexible and add some convenience features, but in terms of functionality it’s at MVP level now.

Tomorrow I’ll do a quick cleanup pass, make a few small tweaks I’ve already thought of, write some basic documentation, and actually push the code up to a public repo so if anyone’s interested in trying it out, it’ll be available.

A Brief History of C

Over at Ars, Richard Jensen has a great article on how the C programming language came to be. I love dives like this into the various particulars and contingencies involved in the development of things we now think of as more-or-less having “always been there”. It’s easier to do with computer technology than many other things, too, because a lot of the people who were directly involved are still alive, and can talk about why they made one decision or another.

It’s a good reminder, too, that very little of how we generally think “the world is” was inevitable — or is immutable. On the other hand, of course, the longer particular arrangements are accepted as the default without examination of their roots, the more inertia they have, and the more effort it takes to make change.

Just think: we might have ended up with 10-, 12-, even 18-bit “bytes” as the basis for our computing technology; or even ternary logic circuitry, with a three-state “trit” as the smallest unit (after all, at the circuit level, a “0” just means there’s currently 0 volts on the line, and a “1” means there’s 5 volts, but there’s no rule that says other signal levels couldn’t exist).

New project: Scuttle

A basic principle of engineering, software or otherwise, is “don’t reinvent the wheel.” I’m gonna anyhow.

Last week I wrote about wanting a very lightweight C unit testing solution, and, in my bullheaded way, wanting to write it myself rather than learn an already-existing system. It’s a commonplace in software engineering that the impulse to start over from scratch is one of the worst habits of programmers; one way or another, reinventing the wheel means doing lots of work you didn’t have to, usually to get a less satisfactory result. Either you end up hand-rolling something you should have just used a library for; or you look at a mass of legacy code full of cryptic comments and weird edge-case handling, think “whew, what a mess, how hard to maintain! better to just scrap it all and start fresh, so it will be cleaner and more elegant,” only to discover along the way that all those weird edge cases really exist, and end up with just as patchwork a code base as the one you meant to improve.

All that said, I’m doing it anyway. It’s not like I’m busy right now, and this is what’s engaging my brain.

So, suppose you have a project projname, and you want to add unit testing. Your project layout looks like this:

~/projname$ ls
include/
Makefile
src/
~/projname$ ls include
include/foo.h
~/projname$ ls src
src/foo.c
src/main.c

You want to add unit tests for the foo module, which is sensible because it’s pretty complex and you want to be sure all of it works exactly right. The foo module looks like this:

foo.h

#ifndef FOO_H
#define FOO_H

int foo(void);

#endif /* FOO_H */

foo.c

#include "foo.h"

int foo(void)
{
    return 42;
}

To add unit testing with Scuttle (let’s say it stands for “simple C unit testing tool, limited edition”), you just need to put scuttle.h in your include path and scuttle.sh in your executable path, create a test/ directory, and add test_projname_foo.c:

#include "foo.h"
#include "test_projname_foo.h"
#include "scuttle.h"
#include <stdio.h>

SSUITE_INIT(foo)
    printf("foo suite init\n");
SSUITE_READY

STEST_SETUP
    printf("foo test setup\n");
STEST_SETUP_END

STEST_TEARDOWN
    printf("foo test teardown\n");
STEST_TEARDOWN_END

STEST_START(foo_return_true)
    int i = foo();
    SASSERT_EQ(42, i)
STEST_END

STEST_START(foo_return_false)
    int i = foo();
    SREFUTE(i == 69)
STEST_END

You’ll have noticed that no test_projname_foo.h header exists: scuttle.sh will generate that for you, along with a test/test_projname_foo_gen.c source file containing some data structures, and test/Makefile.

~/projname$ ls test/
test/test_projname_foo.c
~/projname$ scuttle.sh
This is Scuttle, v1.0.0.
Working...
    * found suite test/test_projname_foo.c
    * generated suite header test/test_projname_foo.h
    * generated suite data test/test_projname_foo_gen.c
    * generated harness test/test_projname.c
    * generated makefile test/Makefile
Done.
Type 'make -C test/ test' to build and run your test harness.
~/projname$ ls test/
test/bin/
test/log/
test/Makefile
test/obj/
test/test_projname.c
test/test_projname_foo.c
test/test_projname_foo.h
test/test_projname_foo_gen.c
~/projname$ make -C test/ test
[$(CC) output]
test/test_projname > test/log/test_projname.log
~/projname$ cat test/log/test_projname.log
This is Scuttle, v1.0.0.
Running test harness for: projname

Test suite projname_foo:
 *** Suite passed: 2 / 2 tests passed.
 *** 1 / 1 suites passed

Aside from the simple SASSERT(x) and SREFUTE(x), Scuttle provides convenience macros SASSERT_NULL(x), SASSERT_EQ(x,y), and SASSERT_STREQ(x,y), as well as SREFUTE versions of the same.
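
To give a flavor of the expected/actual logging, here’s a rough sketch of the shape an assertion macro like SASSERT_EQ can take. This is hypothetical scaffolding (the s_pass flag and the message format in particular), not Scuttle’s actual implementation:

#include <stdio.h>

/* Hypothetical sketch of an expected/actual assertion macro; not
 * Scuttle's actual implementation. Assumes each test body runs inside
 * a generated function with a local result flag (s_pass here) that the
 * harness inspects afterward. */
#define SASSERT_EQ(exp, act)                                       \
    do {                                                           \
        if ((exp) != (act)) {                                      \
            printf("    FAIL %s:%d: expected %ld, actual %ld\n",   \
                   __FILE__, __LINE__, (long)(exp), (long)(act));  \
            s_pass = 0;                                            \
        }                                                          \
    } while (0)

The point is that the macro captures the file, the line, and both values at the call site, so the test author never has to write that boilerplate by hand.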

As of this writing, scuttle.h is at or near v1.0 completeness, the set of files scuttle.sh will generate is settled, and I’m beginning work on scuttle.sh itself. More on those soon.

Testing Tuesday

I have a bad habit of getting an idea for a software project, starting to write code without much planning ahead, very quickly discovering something I hadn’t considered or some tool I’m lacking, deciding to pivot to writing a separate utility or library to address that problem so that I can come back to the original thing later, and starting to write code for the new project without much planning ahead.

You can see where this is going, and it’s not Finishedprojectsville.

Over the years, though, I have managed to accumulate some tools to help counteract these tendencies. When I’m working on hobby projects, I usually like to work in plain C, with as little scaffolding as I can get away with — Vim will do nicely, thank you — but even so it turns out there’s some boilerplate that it’s tedious to retype every time I start a new project. So I wrote a Bash script that defines a function cproj():

/home/smadin $ cproj foo
/home/smadin $ ls
foo
/home/smadin $ cd foo
/home/smadin/foo $ ls -F
include/ Makefile src/ test/
/home/smadin/foo $ ls include
foo.h
/home/smadin/foo $ ls src
foo.c main.c
/home/smadin/foo $ ls test
foo_test.c Makefile test.h
/home/smadin/foo $ make
mkdir -p obj bin
/usr/bin/gcc -Wall -Werror -Iinclude -c -o obj/main.o src/main.c
/usr/bin/gcc -Wall -Werror -Iinclude -c -o obj/foo.o src/foo.c
make -C test
make[1]: Entering directory '/home/smadin/foo/test'
/usr/bin/gcc -g -Wall -Werror -I. -I../include -o foo_test.exe foo_test.c ../obj/foo.o
./foo_test.exe > foo_test.log
make[1]: Leaving directory '/home/smadin/foo/test'
/usr/bin/gcc  -o bin/foo.exe obj/main.o obj/foo.o
/home/smadin/foo $ cat test/foo_test.log
foo_test
initializing test data...
...done
test 0 of 1...
test_foo_dummy()...
foo_dummy() returned 42, expected 42
foo_test: 1 tests passed out of 1
/home/smadin/foo $

As you can see, cproj() creates a very basic skeleton for a C project, including a very primitive, hand-rolled unit-testing facility. test.h just defines an array of function pointers and manually populates it with the declared test functions:

#ifndef FOO_TEST_H
#define FOO_TEST_H

void init_test_data();

int test_foo_dummy();

int (*test_array[])() = {
    test_foo_dummy,
};

#define FOO_NUM_TESTS (sizeof(test_array)/sizeof(test_array[0]))

#endif /* FOO_TEST_H */

And foo_test.c:main() simply iterates over the array, breaking as soon as a test returns false. This has the advantage of being very simple and completely self-contained — the cproj() function contains the complete here-documents into which the project name is interpolated, so the only dependencies are the script itself and GCC — but also the serious disadvantage of being…very simple.

There are no assertions, so every test case has to do its checks and any logging manually, and return a true or false value accordingly. A single failure aborts the entire test harness, preventing accurate reporting of the pass/fail rate. There’s no facility for defining separate suites of test cases for related functionality, all managed by the same test harness, for more helpful reporting. And worst of all, it’s completely static: the user has to (that is, I have to) manually enter each test case function name 1) in the source file where I define the test case, 2) in the header where I declare the test case function, and 3) in the header where I define the function pointer array.
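
For concreteness, the generated harness boils down to something like this (a sketch reconstructed from the behavior and log output above, not the verbatim generated code):

#include <stdio.h>

#include "test.h"

/* Sketch of the generated foo_test.c harness, reconstructed from the
 * behavior described above; not the verbatim generated code. */
int main(void)
{
    unsigned passed = 0;
    unsigned i;

    printf("foo_test\n");
    init_test_data();
    for (i = 0; i < FOO_NUM_TESTS; i++) {
        printf("test %u of %u...\n", i, (unsigned)FOO_NUM_TESTS);
        if (!test_array[i]())
            break;              /* first failure aborts the whole run */
        passed++;
    }
    printf("foo_test: %u tests passed out of %u\n",
           passed, (unsigned)FOO_NUM_TESTS);
    return passed == FOO_NUM_TESTS ? 0 : 1;
}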

At this point you are no doubt thinking, “why on earth aren’t you just using one of the many existing unit-testing frameworks, which would solve these problems for you?” Excellent question! Now, moving on.

What I want, and I suspect what most programmers want, out of a unit test framework is for it to cause as little friction as possible. I want to have to think about the framework as little as possible in order to write tests and have them run and provide me useful output, so that in writing tests I’m spending almost all my mental energy on what I’m testing, how it’s supposed to work, and how it might fail. To remedy the defects of the current cproj approach, what I want is:

  • assertions (true/false, null/not null, equal/not equal, equal/not equal for strings) with some kind of inherent expected/actual logging, so I don’t have to write those sprintf()s in each test function
  • a boilerplate test harness I don’t have to write each time, which can run all the test cases without aborting on the first failure, and report a total pass rate
  • test cases split into suites (with appropriate reporting in the harness) so I don’t have to put everything in one big source file
  • dynamic discovery of tests so I don’t have to copy and paste each function name to three different places
  • minimal extra headers or other files so that integrating testing into a project is as simple as possible

And I think I can get there, or most of the way there, with one header and one shell script: the header to define assertion macros and so forth, and the script to scan source files (test suites) in the test subdirectory, parse out the suite and test case names, and generate the per-suite headers and the harness source file. I’ve been experimenting with this idea, and I’m partway there already.
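
The scanning half is less magic than it might sound. Supposing test cases are declared with a recognizable macro like the STEST_START(name) form, a single sed invocation can pull the names out of a suite file (a sketch of the idea, not the eventual script):

# Sketch of the scanning idea, not the eventual script: list the test
# case names declared via STEST_START(...) in each suite under test/.
sed -n 's/^STEST_START(\([A-Za-z0-9_]*\)).*/\1/p' test/*.c

From there, generating the per-suite headers and the harness source is just more pattern substitution into here-doc templates.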

More on this soon.

Remember blogs?

Blogs were these things we used to have, and the internet was better then. Correlation isn’t causation, but still, one has to wonder. I had a blog once, and sometimes some of my posts were very mildly popular. I helped moderate a much bigger blog, and I participated at some other blog communities. All of that kind of withered, or maybe I just fell away from it, as social media became the dominant mode of internet interaction. But what if it didn’t have to be? What if: blogs, again?

Anyway, I guess it’s worth a shot. I need something more productive to do with my time than constantly being mad because I saw a bad take on Twitter — Twitter is nothing if not an endless source of all the bad takes you could ever get mad about, and then some — and it’s hard to look at the IndieWeb movement and not think, “you know, they might be onto something there.” Individuals controlling their own space and experience on the web, using open protocols and prioritizing accessibility and interoperability over the interests of a for-profit corporation that controls a massive platform? Sounds all right.

I’ve been saying for years that Twitter’s bad for me, and I don’t think I’ll be there too much longer, now. Let’s try blogs again.