OMEC Retrospective
Six months. Six examples. Six conversations. September closed out the first cycle of Dream Foundry’s Official Media Exploration Club and we learned a lot. I want to take a moment to extend a deep and heartfelt thank you to all of our discussion leaders. Ferrett Steinmetz, Rachel Quinlan, KT Bryski, Darcie Little Badger, SL Huang, and Emma Osborne: you are all, individually and as a whole, fantastic. Thank you for stepping up to be the first.
If you want to revisit the insights about craft and the specific works we discussed, you can share in that learning by dropping in on the conversation here. Just because the facilitated part of the conversation is over doesn’t mean it’s disappearing or that the conversation has to stop.
The learning I want to talk about here, though, isn’t a rehashing of the conversations happening in the OMEC but a look at the OMEC itself. This was the second program offered by Dream Foundry, launching a few months after we started publishing content on our site, and we learned about program planning and implementation in general as well as about how to run this program in particular. Now that we’ve had a few weeks to evaluate, I want to discuss some of those takeaways, both so people can get some insight into how our program planning and evaluation works, and so others can benefit from our experience when planning their own endeavors.
From the outset, the OMEC was designed as a program to embody the core premise of the Dream Foundry by bringing creators from different areas of the industry together in a shared conversation where they could learn from each other. It helped that, as a program happening online on our forums, its potential expenses were low: the threshold for success the pilot had to hit in order to justify itself was, consequently, modest and attainable. With that, we had two major metrics we planned to use for assessing the program’s success, both while it was running and after.
Does it work on its own?
The first of those metrics was the success of the program itself. Did it run well and generate the kinds of interactions and conversations we designed it to facilitate?
Specifically, we examined:
- Did discussion participants represent the diversity (in role) of the industry?
- Was participation consistent from month to month?
- Were the logistics of the OMEC implementation (e.g. recruiting, onboarding, and paying discussion leaders) smooth, functional, and replicable?
The third criterion was the one that was most dynamic over the course of the cycle. Payment was smooth from the outset because it followed the procedures we’d already established for content management. Onboarding got better as the cycle went on. The first couple of discussion leaders, after agreeing to join and choosing a work, were basically told: “We’re figuring this out. Do what seems like a good idea and we’ll see what happens.” (Those early discussion leaders, especially Ferrett Steinmetz, deserve an extra dose of gratitude for stepping up under those circumstances.) Once we had an idea of what seemed to work and what didn’t, we developed onboarding documentation, which was a huge step in the right direction. It wasn’t a fix-all, though, which leads us to the biggest takeaway for this criterion: recruiting discussion leaders needs to be completed *before* choosing a theme and starting the cycle. Not every theme can be well supported in every medium. “Found family” was a great theme for a lot of categories but didn’t work well for illustration or games. We made it work (and the games segment of this cycle was the one I personally found most enlightening), but we created unnecessary complications by choosing a theme without involving everyone who’d need to work with it in the conversation.
The first and second criteria (diversity of role and consistency of participation) couldn’t be meaningfully evaluated while the cycle was in progress, so we examined them after it ended in September. They were both more miss than hit. OMEC participation reflected the prose-writer-heavy demographics currently present throughout the organization. That makes sense, but it is notable that despite being a program very much designed to bring in and offer value to people from a variety of backgrounds, there’s no evidence the OMEC attracted participation that was more diverse than the organization as a whole or fostered much interaction across roles. Similarly, while different months had participation from different people, the participants who were consistent across months *cough* work for us.
As a result, in planning for the next cycle, we’re specifically looking to:
- Get commitments for discussion leaders for each month ahead of time and have their input involved in theme selection.
- Include the OMEC in outreach efforts planned for 2020 to address the current overrepresentation of traditional prose writers in program participation. (We’re not trying to get rid of any of you, prose writers. We’re glad you’re here! But having the rest of our industry hanging out here is good for us, too.)
- Increase participation stickiness from month to month.
With the pilot cycle as a baseline, we’ll have a clear means of measuring the effectiveness of the changes we make.
Does it work for the organization?
The other metric for judging the effectiveness of the OMEC was whether it worked for the organization. We have a track record now and more organizational maturity than we had when we first launched the OMEC. Despite that, we are still very new and while we’re rich in many things, we don’t have the financial resources to be careless, or even cavalier, about what we fund.
In terms of mission and project goals, the OMEC is and remains perfectly aligned. It ties very clearly into our core principles of “inclusivity,” “mentorship,” and “networking.” “Relevance” is the fourth principle, and while the OMEC doesn’t intrinsically tie into it, by choosing discussion leaders who are active in their fields now and works that are pertinent to the industry, we slip that one in, too.
But one of our organizational needs, across all our programming, is “outreach and growth.” Our published content builds engagement with the site, develops an archive of resources, and keeps us consistently present and visible. The contest is a giant road sign pointing people to us and drawing them in. (The numbers on that will need more time to be properly crunched, but the preliminary ones are quite good.) Does the OMEC do that?
There are two ways to measure this. The first is in terms of discussion participants, and as discussed above, that’s an area where growth and improvement will be a focus for the next cycle. The other gauge, though, is organizational reach and recognition. This is, arguably, where the pilot of the OMEC demonstrated its strongest success. Engagement on social media, especially when current discussion leaders amplified our outreach in those spaces, got a demonstrable boost around the OMEC. This provides some evidence that the core concept behind the OMEC is appealing, and that our improvement efforts should focus on converting that social media engagement into participation in the club itself.
What’s that all mean?
There’s a danger here of reading the preceding and seeing a lot of negativity. Yes, there’s “needs improvement” stamped all over that report, but that’s not bad. This was a pilot. If we’d walked away from it going, “Perfect. Let’s do exactly that over again,” we’d be missing opportunities to improve. This pilot cycle could have led to shuttering the program without another run. If the logistical overhead in running it had exceeded the organization’s capacity to support it, we’d have either done a major redesign or nixed it. Similarly, if we’d seen evidence that the OMEC was functioning as a deterrent for outreach or engagement, this would have been its only cycle. What we saw instead was evidence that the core concept works as intended, the program runs well with the resources we can allot to it, and that it has some intrinsic ability to foster outreach and engagement.
We’ll make the changes and adjustments necessary to apply the lessons learned from this cycle and improve the areas that need it. We have two more six-month cycles planned for 2020, and we’ll run those with the same measured, evaluative approach we used in the pilot. Then we’ll do a hard assessment of the program to decide whether, with those improvements and any others we make as we see the effects of the changes, it makes sense for us to keep running it.
That, in a nutshell, is how we plan and assess our programs. Thoughts or questions? Feel free to share, either in the thread for discussion of this article on the forums, or by dropping a line to leaders@dreamfoundry.org.

Jessica Eanes
Jessica Eanes, also known as Anaea Lay, lives in Chicago, Illinois, where she engages in a numinous love affair with the city. She’s the fiction podcast editor for Strange Horizons, and has had her short fiction published in a variety of venues including Lightspeed, Apex, Beneath Ceaseless Skies, and PodCastle. Her CYOA interactive game, “Gilded Rails,” was released by Choice of Games in 2018. It features a demonic cat, an implausibly efficient accountant, and far too many potential romantic interests. For fun she reads, cooks, eats, plays board games, interrogates people about the logistics of their chosen field, and forms intricate business plans over brunch.