betacup learnings – curation at scale

Simplicity did win, but we're just starting to understand the complexity of the process of choosing the best ideas.

In particular, in the coming weeks, we’re going to dig into two topics in a lot more detail:

+ choosing the right tasks

+ choosing the right people (for the tasks)

Choosing the right tasks

Much of what we do focuses on deciding what to ask of participants. In this case we designed the challenge around 1 – 9 – 90.

We expected a small number of people to submit ideas, a larger number to comment on, share and rate the ideas, and the majority simply to view them. Judging by the quality of the ideas, the level of participation, and the media coverage and conversation on Twitter and Facebook, this approach seemed to work quite well.
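As a rough illustration of how that 1 – 9 – 90 split plays out, here is a minimal sketch; the audience figure below is a made-up example, not a number from the contest.

```python
# Rough 1-9-90 back-of-the-envelope: given an overall audience size,
# estimate how many people create, curate (comment/share/rate), or just view.
# The audience figure below is an illustrative assumption, not contest data.

def participation_split(audience, creators=0.01, curators=0.09):
    """Split an audience according to the 1-9-90 rule of thumb."""
    return {
        "creators": round(audience * creators),                   # submit ideas
        "curators": round(audience * curators),                   # comment, share, rate
        "viewers": round(audience * (1 - creators - curators)),   # just look
    }

print(participation_split(50_000))
# {'creators': 500, 'curators': 4500, 'viewers': 45000}
```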

However, we also made some assumptions about curation, or more specifically:

+ how feedback might happen

+ how ideas would be selected

The 15 jurors had a difficult time getting through 430 ideas, which were updated three times each on average. To further complicate matters, there were more than 5,000 comments, many of them detailed and involving specialized discussions ranging from materials to legal issues.

So one key question for us is: how can this process scale without making massive demands on a jury or the other people charged with curation?
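To put rough numbers on that load, here is a back-of-the-envelope calculation using only the figures above; the assumption that comments split evenly across jurors is ours.

```python
# Back-of-the-envelope review load, using only the numbers cited above.
# Assumes every juror looks at every idea revision and that comments are
# split evenly for skimming -- both simplifying assumptions.

ideas = 430
avg_revisions = 3
comments = 5_000
jurors = 15

idea_versions = ideas * avg_revisions     # ~1,290 idea versions to keep up with
comments_per_juror = comments / jurors    # ~333 comments each, if split evenly

print(f"{idea_versions} idea versions, ~{comments_per_juror:.0f} comments per juror")
# 1290 idea versions, ~333 comments per juror
```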

Which leads us to the second area of interest.

Choosing the right people

Our primary concern when we started was finding people to submit ideas. We knew there was interest from some initial testing of the idea, and we worked with Jovoto and Core77 to get the word out in their respective communities. We then reached out to a variety of additional networks and media outlets. This seemed to work quite well, since most of the winners were professional designers or masters-level design students.

However, as we discussed above, the hardest tasks turned out to be feedback and selection. In particular, we were curious about the differences between the jury selection and the community selection, since we had both: there was some overlap between the shortlisted ideas and the community's top-ranked ideas.

As we explore where the differences come from, it's clear that since we invited people to share their ideas (and to encourage others to vote for them), there are some “popularity” effects. This is a common “crowdsourcing” issue: if the crowd is not aligned on how “the best” is defined, the results quickly devolve into a popularity contest.

However, as we gather data about the quality of ratings, feedback and ideas in the form of karma, we can begin to select people with higher karma to participate in the rating process (interestingly, this is being proposed by the community, but that's a story for another post).

So rather than a jury of 15, we might have had a jury of over 1,000 people. And this is how we might be able to scale the curation process for large numbers of complex idea submissions.
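As a rough sketch of that idea, the snippet below gates rating on karma and averages only trusted ratings; the threshold, data shapes and scores are our own illustrative assumptions, not how the betacup platform actually works.

```python
# Minimal sketch of karma-gated curation: only participants whose karma clears
# a threshold are invited to rate, and an idea's score is the mean of those
# trusted ratings. Threshold and data shapes are illustrative assumptions.
from statistics import mean

def select_raters(participants, min_karma=50):
    """Return the participants trusted enough to act as a large, distributed jury."""
    return [p for p in participants if p["karma"] >= min_karma]

def score_idea(ratings, trusted_ids):
    """Average only the ratings cast by trusted raters."""
    trusted = [r["score"] for r in ratings if r["rater_id"] in trusted_ids]
    return mean(trusted) if trusted else None

participants = [
    {"id": 1, "karma": 120}, {"id": 2, "karma": 15}, {"id": 3, "karma": 80},
]
ratings = [
    {"rater_id": 1, "score": 4}, {"rater_id": 2, "score": 5}, {"rater_id": 3, "score": 3},
]

jury = {p["id"] for p in select_raters(participants)}
print(score_idea(ratings, jury))  # 3.5 -- only raters 1 and 3 count
```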

To be continued.

One Comment

  1. The relationship between the short-listed ideas and the community-generated list is the most interesting dynamic for me. Part of me thinks it’s a problem to let the crowd keep a juror from evaluating something (it feels irresponsible), while another part of me tends to think that’s *exactly* why you use crowds.

    I also find it interesting how communities band together when they collectively talk about a contest (threadless vs. jovoto regulars vs. core77). Would love to have you discuss this as the post continues.
