incubator-general mailing list archives

From Rich Bowen <>
Subject Re: Apache Metrics, Not Apache Humans
Date Wed, 18 Nov 2015 17:16:29 GMT

On 11/17/2015 08:13 PM, Marko Rodriguez wrote:
> Hi,
>
> I suppose the distilled intention of the proposal is to identify the answer(s) to the following question:
>
> 	What makes a "good" open source project?
>
> As I read on general@ and from our project's mentors, "good" is grounded in personal experience (i.e. anecdotes). Why not use the data you gather to quantify "good"? That way it's not "well, I believe"; it's more "in this particular situation, given these variables, there is an X% success rate." It also leads to more exploration -- "Huh, I don't think I've ever seen an Apache project do it like that -- hell, give it a try and let's glean the stats from it. If anything, we learn."
>
> Personally, I'm all about "Do whatever you want" (try it -- who cares, as long as it's legal). However, if there must be structure, perhaps The Apache Way is a local optimum? Only a broad statistical landscape will tell, and only through data gathering and analysis will that landscape be constructed from the numerous, diverse social/technical experiments -- i.e. Apache projects! Without that openness and computational introspection, Apache podlings will simply internalize the objective of "just do as expected and graduate." The problem is that this only ingrains a particular philosophy/approach that may not be fit in the long run. It just seems (to me) that this "carrot-on-a-stick" model of podling/top-level is outdated, much like our modern education system (just take the classes, get the grades, give the teacher an apple, and get the hell out of here).
>
> Again -- just shootin' ideas. I have no bee-in-the-bonnet or axe-to-grind. I've just become interested in how your minds tick...

So, I am a huge fan of collecting metrics, trying to squeeze wisdom out
of them, and making community decisions based on what's likely to
succeed. The trouble is, past performance is not a guarantee - or even a
reliable indicator - of future performance, because there are so many
variables to consider.

So, yes, we should do this, but we should avoid trusting it completely,
because it is known to fail.

We can cite numerous examples of deeply dysfunctional, hostile,
unhealthy communities that are HUGELY successful. Several come to mind.

We can also cite friendly, welcoming, well-managed communities that are
unable to achieve any measurable success in terms of actual user adoption.

In each of these cases, the metrics are useful, interesting, and
worthy of study, and even suggest decisions that should or could be
made. But they are misleading unless a human is there to interpret and
implement them.

I'd love to see more things like Black Duck and Bitergia, to see them
open sourced like some of the work that Roberto Galoppini has been
involved in, and to see more intelligence and less shot-in-the-dark
understanding coming out of them. If there is wisdom to be gained, and
findings that can be consistently reproduced, we should pursue that. So
much in open source, however, depends on personalities, and we tend to
attract some of the more ... ahem ... interesting personalities in the

Rich Bowen - - @rbowen - @apachecon

