I love Wikipedia. I love using it, frequently spending many a spare minute (that I don't actually have) simply wandering from one entry to another, learning things I never knew I never knew. I love it, too, as an amazing example of why sharing and openness work. For those who aren't programmers, and who therefore don't grok the evident rightness of the open source methodology, Wikipedia is a great way of explaining how it's done and why it's so good.
So imagining a world without Wikipedia is pretty frightening. Actually, you don't need to imagine it, because apparently it just happened a few hours ago:
As of the time of this writing, Wikipedia has been down and out for around 40 minutes. As of yet, there is no explanation as to why, but Twitter seems to be buzzing about it as well.
According to a comment on that same post:
What happened with the last outage was that a heating issue killed one of the two datacenters (the one in Amsterdam), and the increased pressure from handling all the traffic eventually brought down the servers in Florida too.
That's not the first time this has occurred:
Weaknesses in Wikimedia’s infrastructure were exposed during an outage in March for the main U.S. Wikipedia site. A cooling problem in Wikimedia’s Amsterdam data center led to a heat condition that caused a server shutdown. The initial problem affected European Wikipedia users, but an attempt to “fail over” to the Tampa data center went awry, and the main Wikipedia site was knocked offline.
Clearly, that infrastructure needs some work; thankfully, that's just what Wikimedia has in mind:
The foundation has budgeted $3.27 million to cover the expense of the new facility, on top of the $1.87 million it expects to spend on maintaining the Tampa and Amsterdam data centers. That level of spending is modest by data center standards, but represents a major investment for Wikimedia, which is a non-profit. In February the Wikimedia Foundation received a $2 million grant from Google to help expand its data centers.
The person leading that expansion is Danese Cooper, who has had a long career in computing, working for just about every key company there, it seems:
She then moved to Apple Computer, the first of several jobs in engineering, including six months working at Microsoft. Her job before Sun was at Symantec, working on version 5 of the ACT! personal information manager (ACT! 2000). This was her first practical experience of internally open development methods and Extreme Programming and convinced her of the benefits of open source and open development.
Cooper joined Sun Microsystems, where she came to public attention for her work on promoting open source. At Sun, Cooper created and managed the Open Source Programs Office from March 1999 until March 2005. She chose the Sun Public License for NetBeans software, helped draft the CDDL for OpenSolaris, and worked on the creation of the Sun Industry Standards Source License for TI-RPC, the Joint Copyright Assignment for OpenOffice.org, and that program's dual-licensing with the LGPL. She jointly received a Chairman's Award at Sun as part of the team creating the Sun blogspaces at blogs.sun.com and Java.Net.
In March 2005, Cooper changed her employment to Intel, where she worked as Senior Director of Open Source Strategy for Channel Software Operations.
In early 2009, Cooper moved to R language startup REvolution Computing based in New Haven, Connecticut, which she left suddenly in October 2009. She joined the Wikimedia Foundation as Chief Technology Officer in February 2010.
I met up with Cooper last week, and heard about some of her plans for expanding Wikimedia's operations.
The good news is that she is starting from a solid base: Wikipedia runs on GNU/Linux, Apache, Squid and MySQL. The bad news is that – unbelievably – Wikipedia is essentially run out of Florida (plus that troublesome caching site in Amsterdam). For such a globalised project – just think of all the languages supported – to be running like this is, frankly, bonkers – and makes outages like today's almost inevitable.
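For readers curious what that Squid layer actually does, it runs as a reverse proxy (an "accelerator") in front of the Apache servers, answering most page requests from cache so the origin servers only see the misses. Here's a minimal sketch of such a configuration – the hostname and origin address are hypothetical placeholders, not Wikipedia's actual setup:

```
# Listen on port 80 in accelerator (reverse-proxy) mode
http_port 80 accel defaultsite=wiki.example.org

# The Apache origin server this cache sits in front of (hypothetical address)
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=apache_origin

# Only accept and forward requests for the site we accelerate
acl our_site dstdomain wiki.example.org
http_access allow our_site
http_access deny all
cache_peer_access apache_origin allow our_site
```

The point of the design is that a geographically distributed ring of such caches can absorb almost all read traffic – which is also why losing a caching site, as happened in Amsterdam, suddenly dumps that load back onto the origin servers.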
To begin with, Cooper is creating a new data centre in Virginia (apparently there's lots of handy dark fibre there), as well as one on the West Coast of the US. Although it would be good to distribute the content even more widely, there are downsides – not least legal ones. Hosting the content in countries like the US and Holland that have relatively liberal laws as far as freedom of speech is concerned (well, at the moment, anyway) means that censorship is less of an issue than it would be if it were hosted in a repressive country like, say, Australia or the UK.
It seems extraordinary that such a critical site in the online ecosystem should hang by such a fragile infrastructural thread, and that only now, ten years after its creation, is something being done about it. But really it's testimony to Wikipedia's amateur origins – “amateur” in the best sense – and the fact that its resources have been very limited. Now that it has a bit of dosh to address this issue, let's hope this allows it to become even more ambitious – and maybe even a little more inclusive (an inclusionist writes....)