6 December 2016

BukGet Project Winding Down

After almost 6 years of operation, it is with a heavy heart that I inform the Minecraft community that BukGet will be shutting down. With the most recent BukkitDev changes, we would be facing weeks of work: retrofitting the existing data to the new BukkitDev URL formats and reverse-engineering the UI changes that BukkitDev has undergone so the API can continue to work. As there are only two of us doing any active work on the project, both swamped with our day jobs and other commitments, we simply don't have the time to do what's needed.

It’s been a wild ride, everyone, and with some folks taking up the reins for Spigot with SpiGet, I feel I can walk away at least knowing that someone is picking up where we left off.

We’ve learned a lot with BukGet over the years. Across 4 major code revisions to keep up with demand, we’ve learned how to build a stable, geographically dispersed web service with nothing more than what really felt like spit and twine. The heyday of 1.2+ million hits a day is long behind us, but coping with that peak demand taught us a lot, even when it meant completely rewriting the application server in Node.JS and migrating the database from SQL to Mongo. We did all of this while keeping the entire code-base open-source and without ever charging the community for this public service.

We hope the community looks at the last 5 years of BukGet’s operation and remembers us in a positive light, but like all good things, this one must come to an end.

25 June 2014

BukGet Infrastructure Expansion

In the past we have run into issues with the API simply keeping up with demand, which caused us to expand out to a second server some years ago to help distribute the load. More recently, however, the architecture we had been priding ourselves on for being simple and efficient bit us in the butt when the US server went off-line for an extended period of time. Unfortunately this was beyond our control, as the hosting provider that was donating the gear to us ran into problems and their entire facility went off-line. We scrambled to bring up a temporary master so that generations could continue as normal, but it was clear we had to develop a better model for handling this.

Our initial thought was to try to take advantage of the situation and get the community to help us raise the capital to run our own gear, our way. Unfortunately reality set in pretty quickly when our rallying cries were either deleted on Reddit or generally ignored. A few companies did respond, however, offering us more donated gear and gratis hosting to keep us up and running. As it seems this is the only model the community wants to support, we eventually conceded defeat on getting our own gear and started communicating with some of the hosting providers that had contacted us.

With this in mind, we engaged several companies and then quickly narrowed the list down from there. We wanted to keep the overall environment small so that it's manageable by our two-man (let's be honest, David does most of the ops stuff now) operations team. We also made some small but fundamental changes to how we will be handling the data, generations, etc. As a result, we went from 2 servers to 6, and we will be looking into making other changes down the road to help us react quickly and effectively to any future changes.

The first change was the decision to have dedicated generation servers. In the past, all generation was done on the Dallas API server, and while this never caused any issues, we were putting too many eggs in one basket. We also figured that if one generation server is good, it makes sense to have a backup in case the first one goes down for any reason (like, you know, in case our provider goes down).

The second change is that we will now have 2 servers in each geographic region, each hosted by a different provider. This means that if something goes down, the whole region (hopefully) won't be significantly impacted.

Thirdly, we decided to expand the MongoDB instances a touch, making sure that the API servers are all identical to, and now slaved to, the current active master generator.
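For anyone curious what "slaved" means in practice, here is a minimal sketch of the kind of setup we're describing, using a standard MongoDB replica set. The hostnames and exact settings below are illustrative only, not our real configuration: the idea is that only the generation servers are eligible to become primary, while each API server carries a read-only copy of the plugin data.

```javascript
// Illustrative replica set config -- hostnames are hypothetical, not our real servers.
// Only the generator members can become primary; the API servers are priority-0
// secondaries, so they hold identical read-only copies of the data.
rs.initiate({
  _id: "bukget",
  members: [
    { _id: 0, host: "gen1.example.org:27017",    priority: 2 },  // active master generator
    { _id: 1, host: "gen2.example.org:27017",    priority: 1 },  // backup generator
    { _id: 2, host: "api-us1.example.org:27017", priority: 0 },  // API server (read-only copy)
    { _id: 3, host: "api-eu1.example.org:27017", priority: 0 }   // API server (read-only copy)
  ]
});
```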

Lastly, while the city-based naming convention worked well for us, it often created some confusion. So we are switching to a state/country model, with each server also rolled up into its respective regional record. This adds more granularity to how we reference our systems and should also help developers when troubleshooting issues with their code.
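To give a purely illustrative idea of the new naming (these hostnames are made up for the example, not our actual records), it looks something like this:

```javascript
// Hypothetical hostnames -- for illustration only, not our real DNS records.
// Each server is named by its state or country, and each also rolls up into
// a regional record so you can target one box or the region as a whole.
var endpoints = {
  'tx.api.example.org': 'single server in Texas (provider A)',
  'va.api.example.org': 'single server in Virginia (provider B)',
  'us.api.example.org': 'regional record covering both US servers',
  'fr.api.example.org': 'single server in France',
  'eu.api.example.org': 'regional record covering the European servers'
};
```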

Our goal here is to make the BukGet platform more stable for everyone and to provide the resources we need to grow. We have already talked internally about supporting more servers than just Bukkit; however, some of these changes were needed first to really bring everything up to speed before we could start working down those paths.

28 March 2014

API1 and API2 Officially Deprecated and Going Off-line

At approximately 1800hrs GMT, API1 and API2 (the legacy APIs) will be brought off-line. As we have been very vocal about this happening, there should be minimal impact to people who are using the API. In the vast majority of cases, there should be no impact at all if you are running the latest version of whatever server manager you use.
We have worked directly with many of the more popular panels (including McMyAdmin, Multicraft, SpaceBukkit, etc.) to make sure that they had a version running on the current API before we shut down the old ones. If anyone is experiencing issues with their panel, please communicate with your panel/hosting vendor.

Also, as a result of this, we will be switching over to our new GeoDNS system for the servers and adding the Paris server back into the mix. This means that Eurasian clients will talk to the Paris server and American clients will talk to the Dallas one. This is a DNS change and is expected to take about 24-48 hours to propagate to all DNS servers.
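For developers, this change should be completely transparent: the same hostname simply resolves to the nearest region. Here is a minimal sketch (assuming the standard /3 plugin listing endpoint on api.bukget.org) showing that no client-side changes are needed:

```javascript
// Minimal sketch -- the /3/plugins/bukkit listing endpoint is assumed here.
// GeoDNS resolves api.bukget.org to the closest region automatically, so the
// client code is identical whether you end up talking to Paris or Dallas.
var http = require('http');

http.get('http://api.bukget.org/3/plugins/bukkit', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('plugins listed:', JSON.parse(body).length);
  });
}).on('error', console.error);
```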