Kickstarter analysis

DivNull Productions backs a lot of Kickstarter projects. Curious about return on investment and so on, I ran the numbers on all projects DivNull backed with a “funding complete” date prior to January 1, 2012. Thus, all projects in the data set have had at least two months to deliver goals (most have had much longer). I wanted to see how much DivNull spent, what types of projects were funded, how much money went unclaimed (i.e. projects which failed to reach their funding goal) and how many projects actually delivered. Over this period, DivNull offered funding to 113 projects, 42 of which failed to reach their funding goal. This is all summarized in the following chart.

Kickstarter results through 2011

The first takeaway from this graph is that the number of projects not yet delivering is higher than I would have guessed. At present, 34 of the projects DivNull funded have not delivered. The monetary totals obscure this a little, as around $1500 of the total can be attributed to just four of these projects. Still, 34 is just under 48% of the funded projects, which looks grim; the vast majority of these projects, however, seem to be still working towards delivery.

Another surprise is that almost half of the cash offered up didn’t get used. After funding this many projects, you get a bit of a feel for which will succeed and which won’t. Initially, DivNull was fairly promiscuous about funding, taking a “give them a chance” approach, particularly when it came to role-playing games. That trailed off slightly as time went on.

As someone who gets irritated by lots of useless project updates, I tracked how many updates were made by each project. On a whim, I created a graph of the number of updates vs. the percentage of the target funds raised by the project:

Number of updates vs. Percentage of target funds

I’m not sure this reveals much of anything, except perhaps that there is no clear correlation between number of updates and funding success, and that wildly successful projects don’t update any more often than other projects. I made an attempt for a while to categorize and count the types of comments made, but ran out of interest, so my dataset is incomplete. The categories may be of interest, though, especially since they divide into two groups: comments I care about and comments I don’t. Every comment fell into one of these categories:

  1. “Spread the word”: requests for backers to shill for the project.
  2. “Project is progressing”: essentially a progress report without offering any new content.
  3. “Nothing is happening”: a non-progress report, usually to reassure backers that the project is still alive, even though it’s not being worked on.
  4. “Success/thanks”: Nearly every project has at least one “we did it” post.
  5. “Explaining delay or problem”: Details about why you’re not getting your swag on time.
  6. “Pitch changes/clarifications”: Adding new tiers, bonus goals provided in the event of overfunding, etc.
  7. “Pitch of related Kickstarter”: If you liked this project, you might like…
  8. “Art/content preview”: Preview text or artwork, all by itself. Sometimes art is included with other sorts of updates.
  9. “Reference to external content”: Links to a forum or blog supporting the project. Sometimes links to freebie stuff supporting the product, such as character sheets.
  10. “Incremental version”: Download to beta drafts, etc.
  11. “Request for information”: the project needs backers to provide information.
  12. “Release details”: Links to PDFs, details of product shipping, etc.

This list is sorted from least important to me to most important. Generally, only the last two really matter to me.

One page dungeon contest entry, 2011

Escape From the Lost Laboratories (pdf link) is my entry in the One Page Dungeon Contest 2011. It is, perhaps, a bit more free-form in its room descriptions than some one-page dungeons. Since the rules specify that the dungeon must be system agnostic, I tried to give just enough detail that readers would think “Ooo… I bet this is how you’d represent that in [insert game of choice]”, but not so much that two different people would do it the same way.

When reading through it, it should be pretty obvious that your players’ experience will depend heavily on their access to teleportation magic. The title assumes they don’t have any, so their only way out is to work through the dungeon. If, however, they can teleport on their own, the focus of the whole adventure shifts drastically, toward exploring and investigating what the place is and how the players might use it for themselves. Some parties might get sucked into the network, take a quick peek, teleport out, and never think about the place again. Others might explore seriously, interested in finding out about its builders, and so on. (If tamed, it would make a pretty great “home base”, for example.)

I am open to feedback on this dungeon, so post it if you have any. Though I have already submitted this entry, the contest allows resubmission with updated versions, so I have until the end of March. Also, feel free to post how you’d flesh out the rooms for a specific system. If anyone is really clamoring for it, maybe I’ll build a Pathfinder version once the contest is over, with MapTool maps and such.

At a more “meta” level, this dungeon makes explicit the notion that all dungeon maps are really just directed graphs. The graph is plain to see in my entry:

One of the reasons dungeons remain popular is that their directed graphs represent not only the geography of the dungeon, but also the flow of the narrative. The dungeon is a tool to control the pacing and sequence of the story, and to place payouts such that reaching them requires overcoming certain challenges. You can see some explicit examples of this in a thread on this topic at Story Games. This thread collects a number of opinions on what traits these graphs need to make dungeons “fun”. I also provide side-by-side comparisons of the maps of some classic dungeons (Tomb of Horrors, Keep on the Borderlands, White Plume Mountain) with their directed graph representations. For example, here is the graph of White Plume Mountain, which clearly shows its “three silo” design:

White Plume Mountain directed graph
White Plume Mountain map

As a quick example of how graph analysis can be used, take a look at room 2. The graph makes it painfully obvious that the adventurers will be moving in and out of that room multiple times. So it would be good to make that room memorable somehow or, perhaps, give it some kind of trap that must be dealt with on each pass through. The revised version of White Plume delivers here (see PDF at link above), with a challenge that is similar in general shape each time through, but different enough in the specifics that it doesn’t get annoying.

One thing I don’t spell out in that thread is exactly how I built these graphs. The key is a program called dot, which is part of the GraphViz package. Once you have that installed, you need to make a .dot file to represent the dungeon. This is just a text file that follows a specific format. These files can get complicated, but for turning dungeons into graphs we only use a small portion of the format. It starts with a skeleton that declares the file as a directed graph and contains a single graph attribute for spacing out the nodes:

digraph G {
	ranksep="0.4 equally";
}

Let’s use a smaller example from one of last year’s One Page Dungeon contest winners: map three from Antti Hulkkonen’s “Den of Villainy!” (reproduced to the right through the magic of the Creative Commons license). For each room on the map, enter one line per connection the room has. For example, room 1 on the map leads to rooms 2 and 9, so one line is "1" -> "2" and the other is "1" -> "9". Do that for each room, then save the file and open it with GraphViz (or open a command line and run dot on it). That’s the basics of it. You can also add embellishments to each line, like using a different color for secret doors and so on. The result might look like this:

digraph G {
	ranksep="0.4 equally";

	"8" -> "3" [dir="both"];
	"1" -> "2" [dir="both"];
	"1" -> "9" [dir="both", color="blue", style="dashed"];
	"2" -> "3" [dir="both"];
	"2" -> "4" [dir="both"];
	"3" -> "5" [dir="both"];
	"3" -> "6" [dir="both"];
	"4" -> "5" [dir="both"];
	"5" -> "13" [dir="both", color="blue", style="dashed"];
	"6" -> "7" [dir="both", color="blue", style="dashed"];
	"9" -> "10" [dir="both"];
	"9" -> "12" [dir="both"];
	"10" -> "11" [dir="both"];
	"11" -> "12" [dir="both"];
	"11" -> "13" [dir="both", color="blue", style="dashed"];
	
	/* Mark the entrance rooms */
	"1" [shape=Mdiamond];
	"8" [shape=Mdiamond];
}
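Typing the edge lines by hand gets tedious for larger maps, so they can also be generated from a plain connection list. Here is a sketch in Python (the room numbers and secret-door markings come from the listing above; the `connections` structure and `edge_line` helper are my own invention):

```python
# Generate GraphViz edge lines from a simple room-connection list.
# Each entry: (room, neighbor, is_secret). Secret doors are drawn
# dashed and blue, matching the hand-written example above.
connections = [
    ("8", "3", False), ("1", "2", False), ("1", "9", True),
    ("2", "3", False), ("2", "4", False), ("3", "5", False),
    ("3", "6", False), ("4", "5", False), ("5", "13", True),
    ("6", "7", True), ("9", "10", False), ("9", "12", False),
    ("10", "11", False), ("11", "12", False), ("11", "13", True),
]

def edge_line(a, b, secret):
    """Format one dot edge statement, two-way by default."""
    attrs = ['dir="both"']
    if secret:
        attrs += ['color="blue"', 'style="dashed"']
    return '\t"%s" -> "%s" [%s];' % (a, b, ", ".join(attrs))

lines = ["digraph G {", '\tranksep="0.4 equally";', ""]
lines += [edge_line(a, b, s) for a, b, s in connections]
lines += ["", "\t/* Mark the entrance rooms */",
          '\t"1" [shape=Mdiamond];', '\t"8" [shape=Mdiamond];', "}"]
print("\n".join(lines))
```

Running this prints a .dot file equivalent to the one above, ready to feed to dot.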

The graphviz package will handle the layout of the nodes automatically. Sometimes it does a better job of this than others (and there are tricks you can use to make the output better), but as these graphs are just to visualize the dungeon, they don’t need to be perfect. In this case, the result looks like this:

The flow of the dungeon becomes clearer in this graph: one whole branch is accessible only through secret doors, there is a main interconnected section, and the two key secret rooms sit (significantly) furthest away from the entry points. All very rational.
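The same edge list also lends itself to simple programmatic analysis, like the traffic observation about White Plume Mountain’s room 2 earlier: counting each room’s degree flags the rooms adventurers will pass through most often. A small sketch in Python, using the “Den of Villainy!” connections from the listing above (the data layout here is my own):

```python
from collections import Counter

# Two-way passages from the example map, one pair per connection.
edges = [("8", "3"), ("1", "2"), ("1", "9"), ("2", "3"), ("2", "4"),
         ("3", "5"), ("3", "6"), ("4", "5"), ("5", "13"), ("6", "7"),
         ("9", "10"), ("9", "12"), ("10", "11"), ("11", "12"), ("11", "13")]

# Count connections per room; both endpoints get credit since the
# passages are two-way.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Rooms with the most connections see the most back-and-forth traffic,
# so they are the best candidates for memorable set pieces.
for room, count in degree.most_common(3):
    print(room, count)
```

In this map, room 3 tops the list with four connections, which matches the graph’s appearance as a hub of the main section.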

Orphans project completed

Around ten months ago, a project started on the Exalted wiki aimed at getting “orphaned” pages connected into the main navigation pages. This was made much harder than it needed to be because this particular wiki runs some very old software; even worse, it’s not the latest version of that software. Though this software was originally used for Wikipedia, it has been vastly outpaced by its replacement, MediaWiki, which has many features the old software lacks. In particular, MediaWiki can give you a list of pages that nothing else links to; the old software cannot. So, DivNull produced software to find these “orphaned” pages, which (hopefully) accelerated the process. That process finished today.

The only reason to even mention something this trivial is that the code actually drew from real computer science, in particular work on directed graphs. Having already scraped the wiki source text for the whole site (for other reasons), it was fairly simple to build a directed graph of page links, representing which pages link to which others.
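Building such a graph from scraped source is mostly a matter of pulling link targets out of each page’s text. A minimal sketch in Python, assuming double-bracket [[PageName]] link syntax; the actual link format depends on the wiki software, and the page data here is made up for illustration:

```python
import re

# Map of page name -> raw wiki source, standing in for the real scrape.
pages = {
    "MainPage": "Welcome! See [[Characters]] and [[Locations]].",
    "Characters": "Back to [[MainPage]].",
    "Locations": "Back to [[MainPage]]. Also [[SecretVault]].",
    "SecretVault": "A hidden page, but Locations links here.",
    "LostPage": "An orphan: no page links to this one.",
}

LINK = re.compile(r"\[\[([^\]|]+)")  # matches [[Target]] or [[Target|label]]

# Directed graph as an adjacency map: page -> set of pages it links to.
# Intersecting with pages.keys() drops links to nonexistent pages.
links = {name: set(LINK.findall(text)) & pages.keys()
         for name, text in pages.items()}

# Pages with no incoming links at all are the orphans.
linked_to = set().union(*links.values())
orphans = sorted(pages.keys() - linked_to)
print(orphans)
```

With this toy data, only LostPage comes back as an orphan.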

Once you have this graph, though, how do you answer questions like “what is the shortest path between this page and that one?” One helpful fact is that the wiki has a single main entry page, and the goal was to find a path from that page to every other page in a small number of steps (given the way the wiki is organized, no more than five steps turned out to be the most useful limit). This simplifies the problem from finding the shortest path between two arbitrary nodes to finding the shortest paths from one specific node to all others. This still sounds computationally expensive (and it is), but it also smells like a problem someone has probably already solved well.

A solution came in the form of some algorithms in the Perl Graph module, which is chock full of graph-based utilities (and can even export to dot files). In particular, it contains an implementation of Dijkstra’s single-source shortest path algorithm, which solves this exact problem. Using these routines, generating the orphan list was fairly simple.
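Since every wiki link counts the same (all edge weights are 1), a plain breadth-first search computes the same single-source shortest paths that Dijkstra’s algorithm does. A sketch of the idea in Python, with made-up graph data (the real work used the Perl Graph module, not this code):

```python
from collections import deque

# Page-link graph: page -> pages it links to (toy data).
links = {
    "MainPage": ["Characters", "Locations"],
    "Characters": ["MainPage", "Heroes"],
    "Locations": ["MainPage"],
    "Heroes": [],
    "DeepPage": ["MainPage"],   # links out, but nothing links to it
}

def shortest_paths_from(graph, source):
    """BFS from source; with unit edge weights this equals Dijkstra."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

dist = shortest_paths_from(links, "MainPage")
# Pages missing from dist are unreachable from the main page; pages
# farther than five steps away would also be flagged in practice.
orphans = sorted(set(links) - set(dist))
print(dist, orphans)
```

Here DeepPage never appears in the distance map, so it gets flagged even though it links out to the rest of the wiki.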

Producing diagrams of the wiki graph, however, was not. Though the graph is easily represented in the aforementioned .dot format, there are so many nodes that existing rendering routines choke on them; even when they did complete, the result was horrible to gaze upon. DivNull looked around and found some interesting approaches to rendering large, sparse graphs.

One idea is to represent the page links as a sparse matrix. The University of Florida seems to do a lot of research in this area. Drawing graphs this way seems fairly straightforward, so it will probably be done at some future point.

Another idea uses a force-based approach designed specifically for very large graphs. A paper details a system that could easily handle a data set of this size, but experimenting with it for the wiki will have to wait.

Naturally, new orphans will surface in no time, but it’s good for now.