A
Up next, on episode 30 of Stack Overflow, Joel and Jeff sit down with Richard White of UserVoice.com. They discuss software bug and feature tracking, Web 2.0 style, from IT Conversations. Hi, this is Phil Windley. Today I'm excited to bring you another great program from Stack Overflow with Joel Spolsky and Jeff Atwood here on IT Conversations. The Conversations Network is a 501(c)(3) nonprofit and we need your help. For a tax-deductible donation of as little as $5 per month, you can support this channel and the rest of the Conversations Network. So please visit conversationsnetwork.org to become a member and help us continue to bring our programs to the world for free. Our audio files are delivered by Limelight Networks, the high-performance content delivery network for digital media. And now, here's Stack Overflow.
B
How do we usually start? We don't do a lot of editing, typically.
C
Cut.
A
What's the usual? For those listeners at home, welcome to Stack Overflow podcast number 30. We have a special guest today, Richard White.
B
Richard White of UserVoice. And the reason we invited him on was because we use his tool, UserVoice, on Stack Overflow. It is our de facto feature and bug tracking tool, although there's some controversy about whether it is in fact a bug tracking tool or not. Yeah, I personally like it a lot. I mean, it matches my mental model of what I want to do much closer than a lot of other stuff I've tried. So, Richard, I know we've done this a couple times, but if you could just talk about the genesis of UserVoice: where did it come from, how did it come about, all that kind of stuff?
C
Yeah, so I'm kind of a back-end guy turned front-end guy. And that process was especially painful when I actually had to listen to a bunch of the users. We were doing a startup called Kiko. It was an online calendar a couple years ago.
A
Kiko, Kiko. Oh, the calendar thing.
C
Yeah. So I was the lead designer on that. And after launch, I was spending half my day reading blog comments and message board posts and emails and all these things about what you should do. That's partly a commentary on how much of a tar pit doing calendaring is, but also just kind of a failure of the feedback system. So we were actually working in the same room as the guys from Reddit, who I think were on here last week. And, you know, it's kind of...
A
That's cool.
C
Maybe we can have people vote up the ideas. And I think they even tried that. They had like a features Reddit, and I don't think it quite works.
A
Well, don't things on Reddit sort of sink after a while? People could put something up on Reddit, but it would rapidly disappear.
C
There are two fundamental problems, right. One is that their algorithm is based on news, so things spike and then they fall and disappear. And the other thing is you just vote up a million things that you want, and it's funny. So I think around the same time, Joel, I was reading one of your articles talking about giving developers $50 to spend on line items for your next release, that kind of thing. And so we actually put those two ideas together, and that's really where UserVoice came from. This concept of: let's have people vote up ideas, kind of like Reddit, but it's a more long-running discussion. So let things stay on the top longer, but force people to only choose the top three things they're interested in.
A
What's the difference? I mean, if they choose everything, then their vote is effectively meaningless. Why not just let them vote for as many things as they want?
C
So a good example of this is a competing solution from Salesforce; they have one for Starbucks, and the top ideas are like "lower your prices." I think on the Dell one it was Linux laptops. Basically, the most popular things will rise to the top, rather than the intersection of popular and important to you. So we have like 90% of people that come to UserVoice will give you one idea, and the other 10% will spend every single vote we give them. So the goal is to kind of flatten out that priority a little bit better. So that's kind of what we do with our system. In fact, I'm very envious of what you guys have done on Stack Overflow with your reputation system. You know, we're looking forward to hopefully doing a little more with ours to kind of incentivize people moving in the right direction. You know, here's a few votes for this, a few votes for that, et cetera, et cetera.
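The vote-budget mechanic Richard describes is easy to sketch. The limits below (ten votes per user, at most three on any one idea) roughly match what UserVoice used at the time, but treat the exact numbers, and the user and idea names, as illustrative assumptions:

```python
from collections import Counter

# Hypothetical vote budget: a small per-user pool, capped per idea, so
# backing everything is impossible and ballots express real priorities.
VOTES_PER_USER = 10
MAX_PER_IDEA = 3

def tally(ballots):
    """ballots: {user: {idea: requested_votes}} -> Counter of accepted votes."""
    totals = Counter()
    for user, requests in ballots.items():
        remaining = VOTES_PER_USER
        for idea, n in requests.items():
            # Accept at most the per-idea cap and whatever budget is left.
            accepted = min(n, MAX_PER_IDEA, remaining)
            totals[idea] += accepted
            remaining -= accepted
    return totals

ballots = {
    "alice": {"openid": 3, "rss": 3, "search": 3, "themes": 3},  # wants everything
    "bob":   {"rss": 1},                                          # one pet issue
}
print(tally(ballots).most_common())
```

Because alice runs out of budget, her fourth idea gets only one vote, and bob's single focused vote is enough to push "rss" past ideas that only the enthusiast backed: the flattening effect described above.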
B
Anyways, they could like earn more votes, is that what you're saying?
C
Yeah, yeah. I mean, I know there's probably another discussion where you're going to berate me about OpenID, but one of our things is we're also trying to just lower the threshold. And so we want to use the votes as kind of like breadcrumbs to lead people along: you know, here's a few votes, give an idea, give us your email. Here's a few more votes, you know, tell us something about you. Here's a few more votes, that kind of thing. Or reward people that are really active contributors and stuff like that. So it's kind of cool. It's a currency of influence, if you will. But yeah, it's really cool to be on here today because, like I said, I think I told Jeff this a while ago, I was really psyched when Stack Overflow started using it, simply because of the two guys on this call. I think you're our biggest account, Stack Overflow. And Joel, you're one of the main reasons we're even doing this. And I actually have been pulling up some stats on your usage, because you do have a very atypical usage of UserVoice, which I think is actually really cool and we can talk about at some point.
A
Before we go into that, I have a question. Didn't we. What did you major in in college?
C
Me?
A
Yeah.
C
Computer science.
B
Oh.
A
Did you take any political science classes or political philosophy?
C
No. No, I didn't.
A
So I was just thinking about, you know, most modern democracies are not actual democracies; they're representational democracies.
C
Right.
A
And that's not really an optimization. It's not just a matter of convenience because it's impractical to have direct democracy. Although some people think...
C
I live in California; I'm very well aware of the downfalls of direct democracy at this point.
A
Yeah, exactly. But it's just that direct democracy is sort of... So how does that influence your thinking on UserVoice? It's sort of a non-representational thing, and you wind up... I mean, if you really created a software product and then literally only implemented the things that people voted for...
C
Right.
A
What does the product look like?
C
There are a couple of third rails of UserVoice. One of them is things like OpenID. Another is things like bug tracking: is this for bug tracking? And the third one is: am I beholden to this? Right. Is this like the California ballot propositions, where you have to do what they say? We actively say no. It's an input. And the reason I built this is because I would spend so much time just trying to put this data together to have a panel of "here's what our users are saying." And as much as possible, try to make it more actually democratized than "here's what the 10% of people that are shouting the loudest are saying," right. But no, I certainly consider it. Like, we actually have another project kicking around, and sometimes it's just good as triage. I mean, the nice thing is people know that they can put stuff up there, and even if I don't respond to it today, it's safe. You know, you can get to it; you know that other people can. It's not going to fall off the front page like on Reddit or those other things. It's kind of there.
A
So it's a pacifier for your complaining users. You can be like, it's all right, we know about that. It's on a list.
B
Well, that's true of all... wait, wait, wait, that's true of all bug tracking systems. I mean, there are way more bugs than there is time to ever do them, right? So that's just a standard thing that happens.
C
Clearly, it's a pacifier for both users and for people like ourselves running things. Right. It keeps everyone kind of... I don't know, it's a good record. It's a much better record than message boards or blog comments or emails or whatnot.
B
The other thing I really like about UserVoice, and again, I have sort of a weird way of using it, so I'm not saying this is typical, but to me, part of the value proposition of UserVoice is that it totally blurs the line between features and bugs, which I think was kind of non-existent to begin with. I mean, sure, there are things that are clearly bugs: oh, I click this and the app crashes, I get an error page; that's clearly a bug. But there are very few of those in the big scheme of things. Not that our code is that great, mind you, but there are just not that many things you can definitively say are clearly bugs.
A
That's because you don't... Jeff, you're just saying that because you're not tracking bugs.
B
But I feel like the bugs are the easy part. It's the features that are actually more interesting to users and the things that they want to build the app with you, essentially. And that's where. Particularly for a site like Stack Overflow, when you want.
A
Yeah, to the extent that you have a conversation with your users, unless there's a literal bug that's stopping them from getting their work done, you're right, they probably do care more. You know, all the bugs that have workarounds they don't care about, but you do. You probably want to fix them, but they'll just work around them.
B
Well, not always, though. I mean, there are some bugs. For example, we had a long-standing bug in pagination where occasionally the pagination algorithm will number things a little bit wrong. And every two weeks somebody opens a new UserVoice item about this, and I don't decline it, because it's a valid bug. Sometimes I'll say, okay, this is a duplicate, but it's really low priority, because it doesn't break the app; it doesn't keep you from doing what you need to do. But on the other hand, we have a community of programmers who are super anal, and this is a good thing, usually. I'm not complaining, because they'll keep you on your toes. So they notice...
A
Those things, and they just keep...
B
They notice the littlest, tiniest details. And that's good. Because, I don't know, the thing I found with UserVoice is that it's very low friction to use, minus the OpenID thing, which obviously we'll get into.
A
So let me look at this system just for fun.
B
There's almost no input. I mean you just type something in and it does a preemptive search for you, which is the main thing. And then you're off to the races, which I love.
A
Let's go over some of the things that we have on here. "Change your OpenID" is their number one thing right now. "Subscribe and unsubscribe to questions" should be allowed with email; you should be able to get an email. It's kind of interesting, because the things that I see on UserVoice are not our actual priorities. Oops.
B
Well, you got to realize maybe these are our priorities.
A
These are our users' priorities, but they're not our priorities. That's the difference.
C
Exactly.
B
Well, you're trying to find the intersection of those two things. And the other thing you've got to realize is this list is really picked over. We went through a phase. How long have we been out now? Three months, almost. Gosh, somebody twittered me that it had been 100 days a couple weeks ago. We've really tried to implement all the top items on the list. So this list is pretty picked over at this point. All the really big-name items, like "hey, you need to have RSS," a lot of the really major requests, have been satisfied. So we're starting to get down into the B team of requests at this point.
A
One of the things I actually kind of like about UserVoice is something it taught me, or rather confirmed: a bias that I've always had, which is that if you ask your users for feature input, you get stuff that you probably knew about or could have figured out, which is okay. I mean, that's not the goal of UserVoice. The goal of UserVoice, I assume, is to let people vote on it, figure out what's important, and prioritize those things. Very, very rarely does somebody invent something that surprises you. What you generally don't get is the big awesome new inventions that you're going to invent. You don't really get that from the users of your product.
C
It's usually, I assume it's like tacking the ship instead of kind of plotting the whole course.
A
Right, right. So one of the biggest things we did in FogBugz 7 was evidence-based scheduling. And you know, we never heard anything from any customer even remotely suggesting, "why don't you do a Monte Carlo simulation to figure out when I'm going to ship?" That's just not the thing a customer is going to tell you. Once we did it, of course, it's now our flagship feature, practically.
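The Monte Carlo idea Joel mentions can be sketched in a few lines. This is not FogBugz's actual implementation; the velocity history and backlog below are made up for illustration. A velocity is estimate divided by actual, so a simulated actual is estimate divided by a randomly resampled historical velocity, and the output is a distribution of completion times rather than a single date:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical historical velocities (estimate / actual) for one developer;
# a velocity below 1.0 means the task ran over its estimate.
velocities = [0.8, 1.1, 0.5, 0.9, 1.0, 0.7]
remaining_estimates_hours = [16, 8, 24, 4]  # made-up backlog

def simulate_ship_hours(runs=10_000):
    outcomes = []
    for _ in range(runs):
        # Resample a velocity per task and sum the simulated actuals.
        total = sum(est / random.choice(velocities)
                    for est in remaining_estimates_hours)
        outcomes.append(total)
    outcomes.sort()
    return outcomes

outcomes = simulate_ship_hours()
# Report percentiles of the distribution, not a single point estimate.
print("50% confidence:", round(outcomes[len(outcomes) // 2], 1), "hours")
print("95% confidence:", round(outcomes[int(len(outcomes) * 0.95)], 1), "hours")
```

The useful output is the spread: "50% chance we ship within X hours, 95% within Y," which is exactly the kind of answer a customer would never think to ask for.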
B
Right.
C
Yeah. I mean, I think the real value here is putting everyone in the same room so you at least have that input. I mean, it's easier to aggregate what your top priorities are within your team, right. And the whole value is being able to then push your top five priorities back to your users. Like, the number one problem we had was not only that it was hard to tell who was interested in something, but keeping track of all the people that were interested and following up with them individually was an extreme pain point. But to your point, yeah, I would never try to bamboozle anyone into thinking that this will help you innovate your app. It will help you smooth out the rough edges and see what kind of obvious gaps there are in your interactions and stuff like that. But I don't know, I haven't seen anything too, too game-breaking at this point. I kind of liken it to this: it won't do the 80% of the heavy lifting of innovation, but it'll help you get that last 20% of smoothing out the rough edges.
B
And two, I think some of these characteristics you're describing, Joel, would be true of any bug tracking system where you let users freely enter stuff: you're going to get a lot of noise.
A
Yeah, well, I'm not using that as a criticism. I'm just pointing out that you really don't want to let your users drive all of your development on a product.
B
Do you want to actually do that, though? That's kind of a straw man. I mean, can you name a product where, like, they let the users design every part of it?
A
Well, I mean, that's the Agile method, right, isn't it?
B
Well, it's a priority list. Extreme programming, Agile.
A
I don't know. I think the extreme programming people would say that. Yeah. You let your users tell you everything that you're.
B
Well, no, no, it's still a collaboration. It's the users plus the development team, plus the people writing the paychecks. Right. You negotiate what the top features should be. Everybody has skin in the game, theoretically. That's the way I understand it.
A
Okay. Hey, Richard, tell us about ObamaCTO. Who did that? Did somebody just stick that on there, or was that your idea?
C
That was... well, it's kind of funny. One of the guys on the team here had an idea to do something like that. We actually did; like, we did change.uservoice.com, and we got a few hundred people on there, but nothing like... I guess maybe that seeded it. Because we spent all of last week scaling out to handle insane amounts of political usage. Before even the Obama one, we had rebuildtheparty.com, which is redstate.com and all the Republican side of things putting together kind of a site for "how should we fix the Republican Party?" You can imagine who shows up to that party first: the Ron Paul guys. They go nuts.
B
Ron Paul. I want to get that.
C
Seriously, I tweeted this the other week: Ron Paul is the new Slashdot effect for politics. I mean, it was just insane. He's still number one on that forum, ideas.rebuildtheparty.com. Then last Monday, Fark got a hold of this. And so here's the second-stage effect: anytime someone figures out the Republicans are doing something online, I feel like Fark shows up, or someone shows up, and they voted up "Truck Nuts for everyone" to the top of the list of that forum.
A
Truck Nuts.
C
Truck Nuts. You're not familiar with Truck Nuts, Joel? They don't have many of those in New York, I imagine.
A
Okay. I'm so out of it. I only just found out about that puppy cam last night. That's how behind I am.
C
What is a truck?
A
What are truck nuts?
C
They are die-cast brass balls for your truck.
B
Oh, I've seen those.
A
We don't even have trucks in New York City.
C
Exactly. Anyways, so that just went crazy as well. At its core, UserVoice is basically: ask an open-ended question, and people vote it up. Obviously, we're of the opinion that we're trying to use this for company/customer type relationships, but we've had interesting use cases like that. The funny thing is, someone else then set up obamacto.org, and they did all the legwork of promoting it, and it ended up on Boing Boing and CNET News and just all over the place, which was really good for us. I mean, it's positive for us to see that. Basically, the value they got out of it is they kind of got a free banner ad: they slapped "ObamaCTO" at the top of the page along with the name of their web dev company, Front Seat, and they got to make themselves part of the narrative.
A
I actually thought... yeah, at first I thought, hey, that's a great promotion for UserVoice.
C
Right? Right. I thought you guys were doing it.
A
Yeah.
C
And it's great that we didn't have to do anything for it. Right. It's good warm fuzzy feedback for me that people are naturally incentivized to just use the platform to promote their own stuff alongside ours. And we obviously get some referral traffic off of that. Politics is just bizarre, though. I guess there's no such thing as bad press, but I don't know how much value there really is in those sorts of things. It was kind of funny, the Republican one. We usually have no more than like 50 comments. In fact, I followed some of your stuff; I'm a big fan of reinventing the message board. I actually did a prototype for AOL a couple years ago based on a lot of your writing, like your whole treatise on message board design. And we try to do that: you know, put the reply form all the way at the bottom of things to disincentivize having these really long-running conversations, because they just degrade, right. And on most UserVoice forums, you know, you never get more than like 50 comments.
A
Right.
C
And that's a lot. There were 980 comments on the top idea on Rebuild the Party.
B
Yeah.
A
And it's not clear.
C
And it's just people shouting at each other. Right. There's no discourse there.
A
It's not clear that there is any good way to have a message board with 980 comments. There's one thing which... I mean, you knew the Reddit guys, and I think I tried to convince them of this, but I'm not really sure if I did. A lot of times, if you watch Reddit on Friday night, there'll always be a lot of anti-Israeli postings by, you know, basically, shall we say, pro-Palestinian people: not Reddit members, just people in general. They'll go on to Palestinian forums or anti-Israeli forums or whatever it may be, and they'll say, hey everybody, go to Reddit and vote this thing up. Just click here. And it's not native Reddit people; it's just sort of a swarm of pro-Palestinian people. And the Israelis are doing the exact same thing. The only difference is that on Friday night and Saturday morning, the Israelis are all in synagogue and they're not allowed to use their computers because it's the Sabbath, at least the religious Jewish Israelis. And so suddenly the balance is broken, and all the anti-Israeli things sort of float to the top of Reddit for the first time. But what I think...
C
Internet Tet Offensive.
A
Yeah. And I don't know if they're actually timing this, specifically doing it on the Jewish Sabbath so as to reduce the number of people that will vote against them, or if it's just a coincidence that these things tend to rise to the top on Saturday, when there are no Orthodox Jews voting against them. But what I actually believe is that when you do something strongly political, you get a lot of visitors coming in that are not actually your native audience. So it's not like the average Redditor is anti-Israeli or pro-Ron Paul or anything like that. It's just that those particular political issues involve communities, these mass roving communities that have their own places on the Internet. And they'll say, oh quick, go vote in this poll here, and then go click on that, and go vote for this here, and vote that up on Digg right now. And so they sort of swarm in and kind of overwhelm any native community that may be behind it and make it look like...
C
Yeah, I mean, that's exactly what happened with the Ron Paul and then the Fark people. And, you know, all you can do is kind of let them have their day, like the next day when we had the truck nuts thing, and you just move on. Right. Fighting that battle is usually not worth it. You just kind of go, all right, have your little victory there.
A
As long as the native participants recognize that those aren't the same people voting up those stories. Those politically charged issues, whatever the political charge: you just have to understand that the people voting on them are not the actual Reddit community, so to speak. You know, some of it is, but some of it is a visiting community.
B
But isn't the political stuff usually sort of kept in its own area? I mean, you try to bottle it and contain it so that it doesn't.
A
Yeah, spill over. I think they have some capability of doing that now.
B
Yeah, that's true. I mean, I would not even consider clicking on the political Reddit.
A
But some of the political ones, yeah, some of them do show up on the homepage.
B
I see. Yeah. Because I mean, political stuff, man, it's just, it's kind of a no win scenario a lot of times. And I try not to even go there if I can avoid it.
A
So don't go there.
B
One thing I wanted to compliment UserVoice on: early on, UserVoice was very influential in my thinking in terms of how I wanted Stack Overflow's UI to work. And I want to really compliment you. I really like the UserVoice UI. Not that it's perfect, but we really did ape a lot of the things in UserVoice because I liked them, like your tab structure and the little notification bar at the top of the screen when you come there for the first time. And you're historically a UI guy, is that what you said?
C
Yeah, I would say a converted UI guy. UI is kind of this intersection of the two churches of design and dev, I feel. But yeah, I consider myself one of those cast-offs.
B
Right. So, yeah, I think it really works from a UI perspective. I mean, I found it very appealing and very easy to get started, and imitation is always the sincerest form of flattery. Now, to Joel's earlier point about going through a lot of this information: I actually do. I've kind of fallen off the wagon recently, but I actually go through pretty much every UserVoice request that comes in. And you're right, like 90% of the time they're not really telling you anything useful. I mean, they're trying to, but it doesn't work for a variety of reasons. Like, (a) you've seen it, it's a duplicate, but they didn't know that, even though you guys have a built-in search and all that stuff; they still do it, obviously. Or it's just not really helpful or interesting. But what I find with UserVoice, the reason I have to go through every single entry that comes in, is that about 5 to 10% of the time you'll get some really good suggestions that you hadn't thought of. It's rare, it's definitely rare, but it makes reading those other 20 UserVoice items worth it, because that one actually does influence your thinking. Now, to your point, Joel, it's not usually some massive generational feature like "oh, you should do Monte Carlo simulations to determine when the product's going to ship," but it'll be some really nice tweak to the UI that you really didn't think of, and after reading it you're like, wow, we totally should have been doing that. So I just want to make sure there's a conduit for people to get that information to me, because they're essentially developing the product for me, or improving it for me. Our team is really small, so there's a limit to what we can just sit around and dream up on our own. So it helps to have an influx of random fresh ideas, as long as you can afford the cognitive burden of going through all of them. That's definitely one way I use UserVoice.
C
I love your usage of the declined status, actually. We put it up there a while ago, and a lot of people are a little hesitant to touch it. Right. If we had a hierarchy of how transparent you are with your users, most people aren't transparent enough to actually say, "I'm going to decline what you say." I've actually got the numbers: you've declined 48% of the ideas on there, which I think is awesome, actually, versus we've only declined 3%. And Joel, the Copilot site has only declined 5%. So I think that's a good thing. That shows you actually like those Copilot guys.
A
They're not going to do anything. Forget it. Just assume that it's declined.
B
Well, no, no, I do want to talk about this, because I actually talked to the Microsoft guys about this. I met with the CodePlex people at PDC and we actually had this conversation, because CodePlex, I don't know if you guys have been there, has a similar sort of voting methodology for their suggestions, and it's very UserVoice-like. It's not the UserVoice UI, but it's very similar. And we were sort of contrasting that with... you guys ever heard of Microsoft Connect? It's like their external bug tracking system where, say you have a problem in Visual Studio, you can go in and actually enter a bug on Connect about the problem that you're having. And of course you need a repro. And they actually do look at this stuff. But the running joke with Connect is that you'll enter a bug that is legitimately a bug, with a repro, the whole nine yards, and nothing will happen with it for years. People will come in and look at it and say verified or not verified pretty quickly, but then just seemingly nothing happens.
A
Yeah.
B
Let me give you a little pet peeve example. In Visual Studio, say you're developing a Windows app: the font that it defaults to is not the system font for Windows. It has some weird algorithm where it picks Microsoft Sans Serif or some crazy font. You would think it would pick the default font.
A
That's because the default for the operating system is extremely complicated.
B
It's actually not.
A
The whole business about when to use Tahoma and when to use Lucida and when to use Verdana.
B
I can point you to blog posts that point out sort of the absurdity of this, but it's just an example. So just humor me and pretend like it's a valid example.
A
Yeah, it is.
B
So you'll enter a bug and it'll sit there for years. Somebody will verify it: oh, "verified." And then eventually, literally years later, it'll get changed to "won't fix." And then you're like, well, why did I have to wait three or four years to figure out that you're not going to do anything?
A
Well, you had to wait for that summer intern whose entire project was going through and won't-fixing all of the bugs.
B
Yeah. I mean, how is that actually better? Letting people just enter stuff and letting it languish, how is that better than telling them up front, look, we're probably not going to do this?
A
No, I think you're right. Just to say, look, we're probably not going to do this: I think that's very nice.
B
I invited Richard to find an item on Uservoice or Stack Overflow that he wanted us to talk about as well. I don't know if you did that, Richard.
C
I looked through some of them, but I don't have any great ones, actually. We were just going to talk about your reputation stuff. But that could be a really long...
A
We can't talk about our reputation stuff. This bores the hell out of people. We can't talk about that anymore.
C
See, that fascinates me.
B
We can talk about that offline, no problem.
A
Deeply into the ground. In fact, here, let me play a user question.
C
Hi guys, this is Chris Conway. I'm a graduate student in New York City. I really enjoy the podcast and also enjoy using stackoverflow.com, but I'm kind of wondering, after 26 episodes of the podcast and nearly endless navel-gazing discussion of how to tinker with the regulation of the reputation economy, if you're ever going to take a turn on the podcast toward less self-reflexive discussion?
A
We are, right now, and we're going to take that... wait. Stop. Shut up. [Screeching vinyl sound cutting off Chris.] Because right now we're taking that left turn, and Jeff and I have talked about this, so maybe we should tell our listeners: one of the things we're going to do is a lot less talking about the design of Stack Overflow, because that's done, and a lot more talking about things that are actually on Stack Overflow, which are just generically interesting programming questions. And I think the new rule for Stack Overflow podcasts is that everybody on the show has to come up with one question from Stack Overflow that they want to talk about. Just pick something random. So here's mine. Let's pick one. Here's one about reputation.
B
Awesome.
A
"What's a good algorithm for reputation on social..." No, here's one, actually. There's a surprising number that are like: "What tricks do you use to get yourself in the zone?" And it's now a community question. What does that mean? A community question means it was edited by a lot of people, and it's now kind of...
B
Well, there's a couple things. You can opt in to Community at the start.
A
Right. And I can't even figure out who originally wrote this, can I?
B
You can click on the revision history, which is where it says "edited" and the date. Click on that.
A
Yeah, so this was originally asked by. I don't know.
C
Drumroll how do I.
A
Is it like the one that says seven? That's the oldest.
B
That's the oldest. The one that says one is the original revision.
A
Oh, the original revision. So it's originally by Tim James.
B
Yeah, but this is a pretty open-ended question. It's appropriate to ask on Stack Overflow because it's, you know, a programming question.
A
It is sort of the number one reason why people don't get anything done: they're just playing around on the Internet and they just can't get started with an actual programming task. It's the "how do I get my editor to open and my web browser to close" problem.
B
Well, here's what works for me, and I'll talk a little bit about it. To me, writing a blog post and writing a program feel similar. You glom on to some meaty part of the problem. It doesn't have to be the beginning of the problem, it doesn't have to be the end of the problem, just some interesting little part, and you start working on that, and that sort of spools you in and draws you into the rest of the code, and it makes you want to flesh out the stuff around it. And this is one thing I tell people who get stuck and say, "I don't know where to begin": well, start anywhere. It's kind of a glib response, start anywhere, but start where your heart leads you, if there's something algorithmic. Let me use Stack Overflow as an example. One thing I knew was going to be really hard on Stack Overflow was dealing with editing HTML, because we wanted to have really rich editing capabilities. But that's a double-edged sword because of all the exploits and stuff. So that's where I started. And it's a super rich area, because you learn about cross-site scripting exploits, you learn about parsing HTML, which is really hard, and it just leads to a lot of interesting places.
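The whitelist approach to user-submitted HTML that Jeff alludes to can be sketched like this. Stack Overflow's real sanitizer was C# and far more thorough; this Python toy (tag set and all) is just an illustration of the shape of the technique: keep only a fixed set of tags, drop every attribute, and escape everything else, which removes attribute-based vectors like onclick handlers along with script tags:

```python
from html import escape
from html.parser import HTMLParser

# Tags that survive sanitization; everything else is dropped or escaped.
ALLOWED = {"b", "i", "em", "strong", "code", "pre", "p", "blockquote"}

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED:
            self.out.append(f"<{tag}>")  # re-emit without any attributes

    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))    # escape all text content

def sanitize(html):
    s = Sanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)

print(sanitize('<b onclick="evil()">hi</b><script>alert(1)</script>'))
```

A real implementation also has to worry about unbalanced tags, nesting rules, and URL schemes in allowed attributes, which is why parsing HTML is the rich, hard problem described above.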
A
So you're saying you might as well start with something that's interesting and fun.
B
Well, not necessarily interesting and fun, but something that intrigues you, something that leads you down the path, right, that you can actually get excited about and then that's your gateway drug to the rest of the problem. That's how I.
A
That's a good way of thinking about it. I've had a lot of success with just biting off a very small chunk. You say to yourself, all right, this is a big problem, I'm not going to be able to do all of this today, but I could at least find the spec for what I'm going to do and open it in my browser, or at least open the project in Visual Studio or whatever it is. You just pick some very, very small piece of it and commit to just doing that. That usually leads me into doing what needs to be done.
B
Like sometimes, using the blog post example again, I'll go collect images for a blog post and that helps me get started. I know I'm going to need relevant images for the blog post, and I'll do some research on the images, and images are fun, right? It's easy and fun, and I at least have a start. I can point to a shell blog post that has images.
C
Yeah, I start with fun. I have to pick something. I have a list of things that are like things I want desperately to do that are nowhere near the top of our stack of important things. It's my guilty pleasure to get me into the app, right? And then I'm in there.
A
Wait, is this a real list? Like you got this in Notepad or a text file?
C
I use something called Todoist. But yeah, it's a classic, just a to-do list of things. And we're all, I think, eating our own dog food, right? So it's just these things I want because they're cool, and for me, I'm a stats guy, so it's usually writing stats. Like the stats this morning, to figure out what percentage of things were declined and stuff like that. So yeah, I need gateway drugs, basically.
B
Well, plus Richard, you know, they code in Ruby so they're already having more fun than we are.
A
Oh, you guys. Do you really use Ruby on Rails?
C
Ruby on Rails? Yeah. Doesn't everyone do that now? What do you mean? I thought all the cool kids did or something.
B
Yeah, all the cool people.
C
It almost writes itself, you know, it almost crashes itself too.
B
Well, actually you joke about that, but you said that your app is what, like how many lines of code? 6,000?
C
Not quite 5,500, I think. That doesn't include plug ins and stuff that's like in the vendor directory. But yeah, the meat of it. And that's also about half test code.
B
Wow. So half of that is test cases. That's definitely not true.
A
For the Stack Overflow code, we have what, 99.1. What percent? What percentage of Stack Overflow is test?
B
Pretty much, yeah. We've kind of fallen off the test driven wagon a little bit, but we do plan to get back on when Jared. Hopefully Jared can be full time in January. That's the current plan and that's something he wants to focus on.
C
Don't confuse me with one of those crazy TDD Rails guys, because we're definitely not that. We just smoke test the crap out of certain things, like APIs and really high-level stuff, and then we're really bad about the rest. Really, our coverage isn't nearly as good as our ratio of test code to real code. So.
B
Well, the impression I got with Ruby is that you kind of have to do it because it's dynamically typed. So the compiler, the compiler is not.
A
Going to catch anything.
B
It's not going to catch anything. So you have to write tests, because otherwise you literally have no idea if your code's going to work until you deploy it. Which, to be fair, is true of, I think, any code. I mean, there's all kinds of bugs you can have that have nothing to do with the compiler. But there's not even the compiler safety blanket, as some people refer to it. Right. You can assign a string to an integer or something like that.
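The missing safety blanket is easy to demonstrate. Here is a small sketch in Python (used purely for illustration; Ruby behaves the same way, and none of this code is from the show): a type mistake sails past any compile step and only surfaces when the code actually runs, which is exactly the gap a test suite fills.

```python
def total_price(items):
    # Nothing checks these types before runtime. Worse, a string price
    # survives the multiplication ("2" * 3 == "222") and only blows up
    # later inside sum(), far away from the actual mistake.
    return sum(item["price"] * item["qty"] for item in items)

def test_total_price():
    # The test stands in for the type check a compiler would have done:
    # it exercises the code path before anything is deployed.
    assert total_price([{"price": 2, "qty": 3}]) == 6
    try:
        total_price([{"price": "2", "qty": 3}])  # a type bug a compiler would flag
    except TypeError:
        return "caught by test"
    return "missed"
```

Running `test_total_price()` catches the mistake at test time rather than in production, which is the point Jeff is making about dynamically typed code.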
C
We have the highly engaged user test as well.
B
That's kind of what we do on Stack Overflow, honestly. Wow.
C
I mean, honestly, Joel might be horrified by this, but we do things, some people would call it very fast, and we sometimes just roll things out. I think the good thing about Ruby and Capistrano and all the emphasis on having really good deployment tools is that it's really easy to go "oops" and take care of those things. At least in the phase we're in right now.
A
Yeah, it's really a matter of the phase. In your first phase you don't have a humongous number of users, and they're pretty sympathetic. Your early adopter users, no matter what the product is, are precisely early adopters because they are so desperately in need of the functionality you're giving them, and they'll forgive an awful lot. It really becomes different when you start to have established users. They're depending on you: every single morning they come in and that thing has to work, because it's now such a crucial part of their life. And that's a later phase, when you kind of need that higher level.
C
We just started signing up a few paying customers, so that phase might be coming to a close sometime soon. It will be missed. It will be missed, certainly.
A
Wait, why do people pay? Again, what do they get?
C
It's. We have kind of like the stealthy enterprise thing we're about to launch. It's just integration stuff, some single sign-on things, custom design, API access, all the kinds of integration and moderation, all the things you can imagine that are absolute blockers for larger companies to use something like this.
A
So this would be for a company that wants to use it to communicate with their customers externally.
C
Right. Think of it as a completely white-label version of UserVoice that's integrated with your stuff. Yep. Cool. So, gotta make money in this economy, right? In this market, you gotta get there quick, too, right?
B
Well, I think that would definitely work, because literally almost every week I get requests: oh, we'd like a branded version of Stack Overflow, we really like the system, etc., etc.
A
So yeah, we always turn that one down.
B
Yeah, we haven't really been able to go down that path for a variety of reasons. But I can see, I mean I can just.
A
Homestar Runner. It's not Homestar Runner, right? Who's the guy? The Mexican boxer. The Mexican boxing guy.
B
Yeah, that's the guy.
A
That's Homestar Runner. No, Homestar Runner is that skinny guy without arms. Strong Bad.
B
Yes, Strong bad.
A
I can just see Strong Bad saying declined. Except you said deleted. Yeah. Okay. Another question from Stack Overflow, Jeff, you want to come up with one?
B
Well, actually I have one because I did a whole blog post recently on the whole NP complete thing.
A
Oh yeah, let's talk about that.
B
And there's actually quite a bit of action around NP Complete on Stack Overflow. Like I'm looking at one that says, what is an NP complete problem and why is it such an important topic in computer science?
A
Oh, that's a good Stack Overflow question. Very canonical.
B
Yeah, it is. And I think the thing I struggled with on my blog is that I try to be concise, and sometimes my conciseness gets the better of me, because I'll summarize an idea in a sentence that I probably should have given two paragraphs to explain what I was thinking and what I was actually trying to say. And people really objected, in the post that I made, to the statement that nobody really knows what an NP complete problem is. And the way I said it is kind of wrong. But let me be clear about my thinking here: what I was referring to is the P equals NP problem. So, a little bit of background on this. NP complete basically means nobody has a good algorithm for solving the problem.
A
Like, other than exhaustive search. And it's going to grow geometrically.
B
Yeah. It takes forever. Like the only good solution the smartest people in computer science can come up with is, you know, try every possible solution. So that's.
A
Yeah. And it has to be combinatorial. So it has to be like there's N factorial possible solutions. So whatever algorithm you come up with is going to work for five inputs, but not for 25.
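The exhaustive search being described can be sketched concretely (a Python illustration, not anything from the show) with subset sum, one of the classic NP complete problems: the only general-purpose approach known is to try every one of the 2^n subsets.

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    # Try every subset, smallest first. For n numbers there are 2**n
    # subsets, so this is fine for 5 inputs and hopeless for 50 --
    # exactly the combinatorial blowup Joel and Jeff are describing.
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo
    return None
```

`subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9)` finds a subset summing to 9 almost instantly, but only because the input is tiny; each extra number doubles the search space.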
B
Right, right. And the thing I was trying to say, poorly, is that the reason we call them NP complete is nobody has proven that they can solve it in polynomial time. In other words, nobody, none of our brightest minds in computer science, can come up with a better algorithm than, you know, N factorial, or even N cubed. A lot of the time it's pretty bad. Right. I mean, if N cubed is the best solution we can come up with.
A
Yeah, but that's not NP complete. That's still not.
B
Yeah, yeah. So it becomes a definition. Yeah. So what I was really trying to say is that, you know, it's this weird sort of definition where you just throw a bunch of smart people at it and they all agree, yep, it's NP complete. And there's, there's nothing saying that another super smart person couldn't come up and say, you know what, I can solve this in N squared.
A
Well, there kind of is, because what you prove, to prove that something is NP complete, is: if you could solve this problem, then you would also be able to solve every single one of these other problems that have all been called NP complete in less than polynomial time. So that would be pretty frigging awesome, but I don't think you're going to do it, so just forget it. It's not like we're waiting for the proof of Fermat's last theorem or something like that.
B
Right. So it's just one of the things that's proven by exclusion.
A
No, you almost always prove that if you could solve this problem, then you would also be able to solve that problem, and we know you can't solve that problem. If you had an algorithm for the knapsack problem, let's say you told me that you had an n log n algorithm for the knapsack problem, then I could use your algorithm to solve the traveling salesman problem. Because all the NP complete problems are all interchangeable, basically.
B
Yeah, they're all equivalent. Yeah, I get that. But there's a recursiveness to that definition that I have difficulty with. Right. Like, if you can solve this one, you can solve them all. And I realize they're reducible, they're similar problems, but I find it sort of a non-definition on some level. It's useful because you're basically defining extremely difficult problems in computer science. You're on the edge of computability, where these problems really just aren't amenable to being solved with computers. Right. I mean, you can verify the solution.
A
Okay, I see where you got in trouble here. First of all, they are amenable. There are often shortcuts that get you what may not be the best answer, but something that's pretty darn good.
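One such shortcut can be sketched in Python (a made-up example for illustration, not anything from the show): a greedy heuristic for the NP complete knapsack problem that runs in O(n log n) and often lands close to, but not exactly at, the best answer.

```python
def greedy_knapsack(items, capacity):
    # items are (value, weight) pairs. Greedily take the items with the
    # best value-per-weight that still fit. Fast, and often "pretty darn
    # good", but with no guarantee of optimality.
    total_value, chosen = 0, []
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if weight <= capacity:
            chosen.append((value, weight))
            capacity -= weight
            total_value += value
    return total_value, chosen

# For items [(60, 10), (100, 20), (120, 30)] and capacity 50, greedy picks
# the two densest items for a value of 160, while the true optimum, found
# only by trying combinations, is 220 (the 100 and 120 items together).
```

That gap between 160 and 220 is the trade being described: give up the guaranteed best answer, get an answer in reasonable time.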
B
Right, right. But again, with a computer, you're typically used to getting the best possible answer. There's nothing in Excel that gives me a formula that's kind of correct. It's always correct. Right. I mean, you're doing math, and computers are math made circuitry. So I think that's what's intriguing, thinking about them.
A
Okay, so wait, where do we go with this?
B
Where do we go? Well, basically I just wanted to talk about it, and I wanted to illustrate that there's actually a really good discussion about this on Stack Overflow. Because I was using Wikipedia, and to be honest with you, there's actually a sentence in the Wikipedia article that says the NP naming convention is confusing. That's codified in the Wikipedia entry, because there's NP hard, there's all these variants of the terms, which I found very perplexing. And reading through the discussion on Stack Overflow is, I think, in some ways more illuminating than the Wikipedia entry. People always say, well, why post things that are on Wikipedia? Just go to Wikipedia and look it up. And what I'm trying to say is that the way some people explain things is actually, to me, clearer than what's on Wikipedia.
A
A lot of times, you know, there's a harder class of problems than the NP class, problems that are even harder to solve with a computer.
B
There's another NP name for that, and I can't remember.
A
Well, there's the problems that are the equivalent of the halting problem. The halting problem is: given an arbitrary program X, will the computer ever get to the end statement of that program? And the trouble is, there's a proof that you can't figure that out, even in NP time. It might take until the end of the universe to figure that out. Even if you had infinite computing resources, you might still not know if the program is going to end in infinity-plus-one time. Maybe there's some really slow calculation going on in there, and eventually it will halt. So there is actually a class of these. And that's kind of an interesting one, because all kinds of things can be shown to be equivalent to the halting problem. "Will code ever get to this particular line?" is equivalent. So for an arbitrary body of source code, you can't correctly determine through static analysis whether or not a particular line of code can be reached. You may be able to figure it out for many existing programs. For a particular program, you may be able to find out if a particular line of code will ever be reached. But you can't do that for arbitrary programs. So that's even harder than NP complete.
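The asymmetry Joel describes, where you can confirm that a program halts by running it but can't in general prove that it never will, can be sketched in Python (an illustration only, not from the show): step a "program" for a bounded number of steps and answer "halts" or "unknown", never "runs forever".

```python
def halts_within(make_program, max_steps):
    # A bounded stand-in for the undecidable halting problem. "Programs"
    # here are Python generators; each yield counts as one step. If the
    # step budget runs out, the honest answer is "unknown": no analyzer
    # can promise "never halts" for arbitrary programs.
    steps = make_program()
    for _ in range(max_steps):
        try:
            next(steps)
        except StopIteration:
            return "halts"
    return "unknown"

def terminating():
    for i in range(10):
        yield i          # finishes after ten steps

def looping():
    while True:
        yield            # never finishes
```

`halts_within(terminating, 100)` reports "halts"; for `looping`, no budget, however large, can ever do better than "unknown".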
B
Right? And I think you started talking about the approximation and the heuristics, and I think those are really interesting too, because you're taking these really difficult, theoretically unsolvable problems and just coming up with these really clever hacks basically to get around them.
A
And approximations. I mean, a lot of the time it may be something where, yeah, there may be a slightly better way of doing this, but not on our planet. You know, it may be reasonable to assume certain things about the actual world.
B
And then I wonder, too: have I ever really attacked an NP complete problem in my actual programming? It's interesting. It's definitely good to know about, because again, we're skirting the edge of computability. I think it's useful for a working programmer to know these are the hardest problems in computer science, algorithmically, and if you do happen to run into one, you should probably be able to recognize it. So yeah, that's kind of what I was getting at with my blog post.
A
You know, anybody who works on language tools often hits the halting problem. Like people that do static analysis: I want to be able to analyze the source code of a computer system and tell you that there can never be, for example, a deadlock between the threads. That would be really useful if you could do that kind of analysis.
B
I just found a stack overflow question on the halting problem. It says, when have you ever personally come upon the halting problem in the field?
A
Yeah, no, I actually did. A lot of times, even something as simple as IntelliSense, and I'm using IntelliSense broadly, for situations where your editor is looking at your code without compiling it, without running it, and trying to tell you something about your code. That can be a definite instance of the halting problem.
B
There's some really good responses here. Just to piggyback on what you're saying, Jason Cohen says sophisticated static code analysis can run into the halting problem.
A
Right, Right, right.
B
If a Java virtual machine is trying to prove that a piece of code will never access an array index out of bounds, it can omit the check and run faster. But it's not always possible, depending on the complexity of the code, to determine if that's true.
A
Right. And those are all things where eventually you say: you know what, I know not to try to solve this problem. You could very much be tempted to say, if I think about it hard enough, I can unwind this loop, I can make a structure of all the possible ways that code can call other code and all the possible ways that, let's say, threads can interact with other threads. And then you suddenly realize that if you could solve that, you could solve the halting problem, and you say, okay, wait a minute, that can't be right. Don't go down that path.
B
This is funny. Check this out. Just some guy says, I literally got assigned the halting problem, as in write a monitor plugin to determine whether a host is permanently down.
A
Yeah, seriously.
B
Right. So, I'll just give it a threshold. No, because it might come back.
A
Come back up after the threshold.
B
Yeah. Much theoretical exposition ensued. So there you go. There's an example of, you know, NP completeness being actually relevant to a working programmer.
A
No. Halting problem. Different than NP completeness.
B
Oh, halting problem. Sorry.
A
Much harder. Much harder. But also still approachable, because there are actually static analysis tools that still provide very useful things. I should mention something. Boy, I wish I knew the name of this thing, but somebody at Microsoft Research has a debugger add-in of some sort where you give it some code and it will actually find those Heisenbugs for you. Do you know what this is? We'll look it up later and put a link to it in the show notes. I heard it on Scott Hanselman's podcast; he went to the Microsoft Professional Developers Conference and walked around and talked to a bunch of the people doing research at Microsoft Research. And one of them had the problem that you have: let's say you write some multithreaded code. Take the simplest possible example. You have two threads, and they're running, and once in a blue moon you get a crash. You know there's a crash in there; you just don't know exactly what timing is going to cause it. And there are obviously infinite possible sets of timings for the code to run, meaning how the processor allocates time slices to the different threads. There are millions and millions of possibilities, billions. I mean, well, it's NP complete. It's not quite the halting problem, but it's a classic. So there's no reasonable way that you can actually test all the possible ways that threads can interact, which makes it very hard to find these kinds of weird bugs, where only this particular timing triggers it and, you know, it's just strange.
B
I'm reading through this as I'm listening and I really should have started on stack overflow with my research on some of this stuff. I mean, I guess it's a testament to the community, but I don't know. I did not find the Wikipedia articles particularly illuminating.
A
Yeah, they're very computer-science-y. Encyclopedia-of-computer-science-y.
B
Yeah, they're just not full of, like, practical examples that as a working programmer I can point to and say, okay, I know what you're talking about.
A
So to go on with this threading thing: you've got these two threads running, and that is definitely NP complete. But what they said is, you know what, chances are if you're having a crash, it's too particular. Let's just start with all the different possibilities for when the computer can decide to switch from task A to task B, and let's just test all of those. Because you'll find the majority of these bugs are going to be in a fairly simple situation: a particular task switch happened from one thread to another on this particular instruction. And then you only have to investigate all the possible instructions during which a task switch might happen. You don't actually have to consider every possible schedule of all possible threads. You will probably uncover most of your bugs just by trying a task switch at every possible time. And there are even ways to figure out where the barriers are: here are all the places in thread A, and here are all the places in thread B, that access the same state. So let's make sure that we've tried task switching on every side of each of these state accesses. And that's all you really have to do to probably reproduce a bug.
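That strategy, enumerating every point where a task switch could happen rather than every possible schedule, can be sketched in Python (a toy model for illustration, not the Microsoft Research tool): two "threads" each perform a non-atomic counter increment as a read step and a write step, and exhaustively trying every interleaving of those steps exposes the lost-update race.

```python
from itertools import combinations

def interleavings(a_steps, b_steps):
    # Yield every way to interleave a_steps A-moves with b_steps B-moves.
    total = a_steps + b_steps
    for a_slots in combinations(range(total), a_steps):
        order = ["B"] * total
        for slot in a_slots:
            order[slot] = "A"
        yield tuple(order)

def run(order):
    # Each thread does counter += 1 non-atomically: step 1 reads the
    # shared value, step 2 writes back its local copy plus one.
    shared = 0
    local = {"A": 0, "B": 0}
    done = {"A": 0, "B": 0}
    for t in order:
        if done[t] == 0:
            local[t] = shared        # step 1: read
        else:
            shared = local[t] + 1    # step 2: write back
        done[t] += 1
    return shared

results = {order: run(order) for order in interleavings(2, 2)}
races = [order for order, value in results.items() if value != 2]
# Only 6 interleavings to check, and 4 of them lose an update, which is
# why the bug only shows up "once in a blue moon" under real timing.
```

The key point is the size of the search: six interleavings of switch points, instead of the astronomically many real-time schedules, are enough to reproduce the bug deterministically.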
B
Right. I remember actually paging through the giant list they have on Wikipedia of every known NP complete problem, and I remember task switching was one of them.
A
But that's interesting, though: you can often solve these problems in a pretty good way by exploring a much smaller problem space. There are often substantially smaller problem spaces in which your solution might well be found, and that's one way you can go about attacking many of these kinds of problems. And what's interesting is there are other problems that are hard in a way that has nothing to do with, quote unquote, computability, like handwriting recognition. There's absolutely no theoretical reason why handwriting recognition couldn't work. It just doesn't.
B
Right. Yeah, well, I think that was the other thing people remarked, is that, you know, there's a lot of really hard problems that aren't necessarily NP complete. And why do we have this arbitrary distinction between, okay, this is an NP complete problem, so it gets a special designation.
A
Right.
B
And we haven't been able to solve handwriting recognition either. That's super hard, isn't it?
A
So, yeah, I guess the best way to describe NP complete problems, and this is just really intuitive and not computer-science-y, is that a lot of the time the problem is: I need to find the optimal way to organize or arrange things, they can be arranged in any arbitrary way, and I need to find the best arrangement. When you start to hear that, you might start to worry, oh, this might be NP complete. And then you might actually go through the effort of seeing whether it is, by doing the little proof and seeing if it's the same as the knapsack problem or whatever. And once you've determined that it is, then you know not to try to brute force it.
B
Well, do you know the XKCD cartoon? The one about the menu items. They're looking at the appetizers on a menu and they challenge the waiter: okay, give us appetizers that add up to exactly $15.05, cash, and get back to us when you're done. So. Yeah.
A
Okay. What else should we do on our podcast?
B
Well, I think we're probably at the limit now.
A
There was a lot of talking at the beginning that didn't really count. We could probably go another five, ten minutes.
B
Okay, well, Richard, did you have anything else that you wanted to bring up?
C
Nothing to interject. I'm glad I got through with that. We skipped past the OpenID thing, so I'm safe. I got nothing, man. I'm spent.
B
Can we commit publicly to OpenID on UserVoice on this podcast? Is that what I'm hearing? That's what I heard.
C
Is that what you heard? That probably gets edited out, right? No, I had a call with the JanRain guys like an hour ago, just for you.
B
That's awesome. Well, a little bit of background. So remember how, Joel, we were wondering, well, how does MyOpenID make money? Remember that question? We now know the answer to that question.
A
Oh, good.
B
They have a service, RPX Now. Is that the correct name?
C
I believe so, yes.
B
Yeah, RPX Now. And it's a paid service that basically greatly eases the implementation of OpenID.
A
So if you want to be an OpenID provider, you pay them, Is that what you're saying?
B
I don't know.
A
If you're a provider, you want your accounts to be.
C
It's if you want to be a consumer, actually. Even a consumer pays.
B
Yeah, I think it's just if you're a consumer. But it's super, super user friendly. They really nailed the user experience, and Scott Hanselman had a lot of really positive things to say about this.
A
I'm confused. What kind of consumer even knows what OpenID is, let alone is going to use it?
B
That's my point with RPX now. They don't have to think about it. They just click on the Facebook icon and then they just log in. It becomes magic at that point.
A
So do the consumers have to pay to be able to click on the Facebook? I'm so confused.
B
No, no, no. Somebody we would pay.
C
UserVoice would pay. A consumer in this scenario is the OpenID consumer, which is the site. UserVoice would pay them to be able to have an actual, usable OpenID experience. Right.
A
Oh, so they have a bad OpenID experience for everyone else. Isn't this just, sorry, isn't this just a GUI widget that anybody can make? And then there'll be some open source one, and everybody will use the nice open source GUI widget, and problem solved.
C
I would assume eventually, once there's actually some convergence around an actual market leader in the space. Is it Google's OpenID, or so-and-so's OpenID? I don't know. Jeff and I had this discussion. We're just a little wary of, you know, third-party people being intermediaries. I'm actually the total opposite: I'm more than happy to let someone else handle it. This is such a temporal problem to me. The next few years are going to be a mess, and I'd be more than happy to give someone else this mess and say, solve it.
A
Right?
B
No. And to their credit, I mean, MyOpenID is a stunning example of doing it right.
A
Why is it a service and not code?
C
Why is it a service and not code? They do have OpenID libraries you can use as well. Just, I don't know. Everything's a service, Joel, come on.
B
FogBugz is a service, man. What are you talking about?
C
Everything is a service.
A
Okay?
C
Everything. Every brand-new thing.
B
I mean, you sell FogBugz both ways, right? Why can't MyOpenID sell this RPX Now both ways?
A
Okay, all right. It's too complicated. I don't understand it. I barely even understand OpenID.
B
Yeah, so JanRain had contacted me because we're a big OpenID client and I had done some publicity for them, and they offered us a free trial of this RPX Now Plus, and I asked them to transfer it to UserVoice, because that's one of our big wishes. And actually, if you go to uservoice.uservoice.com, a little bit of recursion there for you.
A
Is there a uservoice.uservoice.uservoice.com, where you can.
B
I've sent people there many times.
C
That's our team.
B
Yeah, it is the number two most requested feature, OpenID user authentication. So anything we can do to help and make that easier and get that done is awesome. So I'm encouraged and looking forward to the results.
C
Luckily, apparently that freebie you got is not bind on pickup, so it looks like it does transfer.
A
So.
B
Nice little World of Warcraft.
C
Little shout-out to my friends there. There you go. Yeah, so I hope that'll be coming soon. We've been trying to do that forever. I have such mixed emotions about OpenID. And that's another third rail, another religious thing that I'm sure nobody wants to hear. We've talked about this before; it's the same discussion every single time.
B
You either love it or you hate it, and it becomes a little religious. But I think having it as an option helps. One thing people object to on Stack Overflow is that it's the only option. The people that react negatively to it find it less offensive if they can do the old thing that they're used to, and have the status quo alongside something that's actually better than the status quo.
A
Right, right.
B
I think that's what you guys would probably do anyway, right?
C
Yeah. As a UI guy, all my criticism of OpenID is about user experience. Like Joel said, who even knows what it is? And I think it's funny that the people that have been most successful with it recently haven't even told you that it's OpenID. It's like a bad brand at this point.
B
Yeah. Well, hopefully RPX Now can help. And like I said, I totally agree, UI is hugely important, and many companies have really dropped the ball on this. Notably Yahoo. Not to mention any names, but Yahoo.
A
Yeah. And Google, which pretended to be doing OpenID and wasn't.
B
They changed that. They're actually now doing the real OpenID.
A
Oh, really? Like just in the week since they announced that.
B
Yeah, there was a big hue and cry, and they changed direction.
A
Yay.
B
Yeah. So good guys win. Awesome. Don't be evil yet again.
A
All right.
C
Don't be overtly evil.
B
Yeah, don't be overtly evil. Nice. Don't be evil in a way that people can see. That's, you know. Yeah.
A
Okay.
B
Well, we're probably at the limit now.
A
We are pretty much at the end of our weekly podcast. You've been listening to Stack Overflow podcast number 30, with special guest star Richard White from UserVoice. You can visit his website at uservoice.com.
C
Or uservoice.uservoice.com, or UserVoice. A never-ending hole.
A
Do you have a blog or, you know, anything you want to plug? Like a place to find you on the Internet?
C
I used to blog at height1percent.com. Don't ever name your blog after, like, a CSS trick; that's just a really bad idea. So I don't really write that much. I also Twitter, and I'll plug other projects there. Yeah, Twitter: rrwhite. That's right, two Rs and a White.
A
Two Rs and a white. The color.
C
Yeah.
A
Not the aisle. Terrific. What else? There's a wiki. We have a wiki for our show where people are invited to write down transcripts of the show, to transcribe the show for the hearing impaired, and that is always linked from the show notes at blog.stackoverflow.com. We need some listener calls. I have a couple. We didn't get any listener calls in the last week, so this is ghastly. So listeners, please call. This is done by either calling our phone number, 646-826-3879, and that's the Stack Overflow podcast hotline. You can call that number, record a message, and we'll play it on a future show. Or you can just record an MP3 or Ogg Vorbis file and email it to podcast@stackoverflow.com. See you next week.
B
See you next week. And thanks Richard.
C
Thank you.
A
You've been listening to Stack Overflow with Jeff Atwood and Joel Spolsky. The Conversations Network is a 501c3 nonprofit and we need your help. For a tax deductible donation of as little as $5 per month, you can support this channel and the rest of the Conversations Network. So please visit conversationsnetwork.org to become a member and help us continue to bring our programs to the world for free. Our audio files are delivered by Limelight Networks, the high performance content delivery network for digital media. The post production audio engineer for this program was Joel Spolsky. Our website editor was Jeff Atwood. The series producer is Jeff Atwood. This is Phil Windley. I hope you'll join me next time for another great presentation from Stack Overflow, here on IT Conversations.
Released: April 19, 2011
Hosts: Joel Spolsky (A), Jeff Atwood (B)
Guest: Richard White (C), Founder of UserVoice.com
In this episode, Joel and Jeff sit down with Richard White of UserVoice to explore how modern software teams track bugs and features, the pitfalls of user feedback systems, and the influence of community-driven tools like UserVoice and Stack Overflow. The trio covers the philosophy of product development, the challenges of interpreting user requests, community dynamics in voting systems, and takes a deep dive into the difference between bugs and features. They also discuss real-world programming challenges like the halting problem and NP-completeness, providing engaging insights for developers of all backgrounds.
Richard White on Feature Voting:
“So the goal is to kind of flatten out that. Kind of flatten out that priority a little bit better ... It's a currency of influence, if you will.” [03:40]
Joel Spolsky on Innovation:
“You very, very rarely does somebody invent something that surprises you ... You don't really get that from the users of your product.” [12:08]
Jeff Atwood on Declining Feedback:
“You've declined 48% of the ideas on there, which I think is awesome actually.” [24:08]
Richard White on Feedback as a Pacifier for Complaining Users:
“It's clearly...a pacifier, both for users and for people like ourselves running things. It's a much better record than message boards or blog comments or emails or whatnot.” [08:03]
Joel Spolsky on External Influence in Voting Systems:
“What I believe is that when you do something strongly political, you get a lot of visitors coming in that are not actually your native audience ... mass roving communities.” [19:32]
Jeff Atwood on Getting in the Zone:
“Start anywhere, but start where your heart leads you. If there's something algorithmic...that's your gateway drug to the rest of the problem.” [31:15]
Richard White on Ruby on Rails:
“It almost writes itself, you know, it almost crashes itself too.” [33:22]
This episode is a deep, discursive look at the challenges of building platforms that don't just track user needs but interpret them in a modern, participatory way. Highlights include sharp, candid discussion of democracy in software, practical considerations for crowdsourcing feedback, technical asides on computer science theory, and the real-world challenge of managing both users and their expectations.
“The value is in putting everyone in the same room so you at least have that input.” – Richard White [12:29]
“Imitation is the sincerest form of flattery.” – Jeff Atwood on borrowing UI ideas [22:24]
If you’re interested in where community-driven software is headed, and how to balance user input against vision, this episode is essential.
For more, visit uservoice.com or check out the full archive at blog.StackOverflow.com.