January 8, 2013 rsanheim

CODE BASS 2 – Thursday, January 17

Next week we are hosting a very special party at Public Works in San Francisco. We are bringing the whole company to San Francisco for our January summit – come help us celebrate and dance the night away.

Please sign up here!

Who

All 143 GitHub employees and you!

When

Thursday, January 17th from 8PM-2AM.

Where

Public Works
161 Erie St
San Francisco, CA NORTH AMERICA

January 8, 2013 matthewmccullough

Brent Beer is a GitHubber

Today, Brent has joined the Training Team to help expand the world's use of Git and GitHub and improve people's proficiency with both. He brings along a valuable background in Ruby that will add yet another programming language to the Training Team's stable of expertise.

Brent is bracing himself for the epic amount of airline travel this role involves by arming himself with industrial-strength multivitamins, chocolate chip Clif Bars, and a Star Alliance frequent flyer card.

Brent Blinking

Brent has already spent several months pairing with Tim and Matthew, teaching our knowledge-thirsty students, and showing his clear passion for teaching in the classroom and online. We're excited to finally have him on our team full-time.

You can look forward to all the new topics Brent will be bringing to both the free and paid classes our team offers. You can find Brent on GitHub at @brntbeer and on Twitter at, symmetrically, @brntbeer.

January 8, 2013 technoweenie

Restore Tidied Pull Requests

We recently started letting you tidy up your Pull Requests. What happens when someone deletes a branch that is still needed? Starting today, you can restore any head branch of a closed Pull Request.

Branch deletion and restoration
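The Restore button does everything for you, but if you prefer the command line, GitHub also keeps a read-only ref for each Pull Request's head. A quick sketch (the PR number and branch name here are hypothetical) that recreates a deleted head branch locally and pushes it back up:

# Hypothetical example: Pull Request #123, whose head branch "cleanup" was deleted
$ git fetch origin pull/123/head:cleanup
$ git push origin cleanup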

January 7, 2013 Kyros

Garen Torikian is a GitHubber

Garen is joining our Documentation team to help us level up all the help docs. He's one of those rare coders who can also write coherently, so we are happy to welcome him to our family.

He credits James Joyce for all his accomplishments, including such stupefying feats as knowing what the Oxford comma is, being able to hold a semi-decent conversation, and being able to enjoy a fine Irish whiskey.

You can follow him on Twitter or on GitHub.

Garen_small

January 7, 2013 Caged

Introducing Contributions

Today we're happy to release Contributions: a new addition to profile pages that lets you see what everyone has been up to on GitHub.

Popular Repositories

Show off the fancy repositories you've created. Your repositories with the most stars and watchers make it to the top of this list.

Repositories Contributed To

You're making contributions to projects all over GitHub and we want to show everyone what you're doing. Whenever you commit to a project's default branch or the gh-pages branch, open an issue, or propose a Pull Request, we'll count that as a contribution. Repositories are sorted by your recent impact. A commit today is worth more than a commit last week.

This also makes it easier to see what others are working on in your Organization. Any repositories you have in common with the profile you're viewing are shown in this list.
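The profile page is the easiest way to browse all of this, but the same kinds of activity are also available in machine-readable form through the public events feed of the GitHub API (v3). A quick sketch, using a placeholder username:

# List a user's recent public activity: pushes, issues, Pull Requests, and more.
# Replace "octocat" with any username you're curious about.
$ curl https://api.github.com/users/octocat/events/public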

Contributions Calendar

The contributions calendar shows how frequently you've been contributing over the past year. We've had a great time with this internally. We've been annotating our ships, vacations, talks and even graduations! Here's our very own Tim Clem's annotated calendar.

tclem-in-2012

Contribution Activity

Contribution activity is a great way to see what someone has been up to on GitHub. You can see a really concise view of proposed Pull Requests, open issues and commits.

Enjoy!

January 7, 2013 dewski

CES Las Vegas Drinkup - 1/10/13

If you live in Las Vegas or you're going to be in town for CES, join me at The Chandelier on January 10th for drinks on GitHub.

The-Chandelier

When

This coming Thursday, January 10th from 7-9PM.

Where

The Chandelier inside The Cosmopolitan of Las Vegas at 3708 Las Vegas Blvd S, Las Vegas, NV 89109.

January 4, 2013 thedaniel

LA East Side Drinkup - January 9

Los Angeles is a big place, and everyone is really spread out. When you plan an LA event, you have to try to balance the needs of the eastsiders and the westsiders. This is too hard, so how about we just punt on that and throw drinkups in pairs?

golden road

When

This coming Wednesday, the 9th of January.

Where

Let's start with the east side, because that's where I live. Join me, @jch and @bryanveloso at Golden Road Brewery in Glendale - more specifically their private bar in the back. They're located at 5410 West San Fernando Rd. Drinks are on us from 8 to 11.

The next LA drinkup will be on the west side, we promise.

January 3, 2013 mattgraham

Motor City Drinkup 1/9/13

There's a lot going on in the Motor City! On January 9th, Detroit is welcoming MongoDB Detroit (registration is free) to the city, and we want to celebrate great code and great people together with them. Let's start 2013 off by joining together, having a few drinks, and sharing some good times.

Come join @juliamae, @bkeepers, @asenchi, and me at Traffic Jam Detroit starting at 9:00 PM. We're buying the drinks!

Traffic Jam & Snug

The Facts:

Traffic Jam Detroit

January 3, 2013 jnewland

GitHub Pages updated to Jekyll 0.12.0

GitHub Pages, the easiest way to quickly publish beautiful pages for you and your projects, just got an upgrade. We are now using Jekyll 0.12.0, which contains several new features contributed by the community.

Read the changelog for the full list of improvements in this latest release.
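If you preview your Pages site locally before pushing, you'll probably want your local Jekyll to match. A minimal sketch, assuming Ruby and RubyGems are already installed and your Pages repository follows the username.github.com naming convention:

$ gem install jekyll -v 0.12.0
$ cd username.github.com
$ jekyll --server --auto    # build the site and preview it at http://localhost:4000, rebuilding on changes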

January 2, 2013 skalnik

Releasing Make Me

A few months ago, GitHub HQ 2.0 got a MakerBot Replicator 2. GitHubbers started printing almost immediately thanks to the easy setup, but having to leave a laptop connected was painful. We quickly learned how to print from the SD card, but then people without a way to write SD cards were out of luck.

What we needed was for Hubot to handle talking to the printer for us. We bundled up some open source projects on GitHub, specifically MakerBot's fork of s3g, MakerBot's MiracleGrue, and @sshirokov's stltwalker, and put a small API on top. Today, we're releasing that as make-me!

Make-me makes it easy for anyone to print, primarily controlled by your favorite Hubot instance. The HTTP API only allows a single print at a time and requires a manual unlock, to help prevent someone from starting a print while a finished print is still sitting on the build platform. In addition to this, it uses imagesnap to take pictures via web cam to give you an idea of how the print is going.
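imagesnap itself is just a small command-line tool, so you can test the webcam piece on its own before wiring it into make-me. For example (the output filename is arbitrary):

$ imagesnap -w 2 print-status.jpg    # give the camera ~2 seconds to warm up, then capture a frame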

We've been using make-me to power all of our 3D printing needs including decorating our office with various prints and making useful trinkets.

What's it look like?

Our setup at the GitHub HQ is still evolving. Right now, it's connected to an old MacBook Air, so we can use the web cam to see how prints are going remotely.

The Office Setup

Setting it up

Once you have your 3D printer plugged into a computer running OS X, you can clone make-me and run the bootstrap script:

$ git clone https://github.com/make-me/make-me.git
$ cd make-me
$ script/bootstrap

Usage

You can send STL files directly to the printer via make:

$ make data/jaws

You can pass some options to MiracleGrue, which you can read about in the make-me README.

Make-me ships with an HTTP API via Sinatra, runnable with script/server. It takes advantage of the CLI interface, along with stltwalker, to give you the ability to scale, print multiple STLs, change infill, generate supports, and more. Want to print Mr. Jaws with the default settings?

$ curl -i -d '{"url": ["http://www.thingiverse.com/download:48876"],
               "count": 1,
               "scale": 1.0,
               "quality": "low",
               "density": 0.05,
               "config": "default"}' \
       http://hubot:isalive@localhost:9393/print

You can easily set up the Hubot script to ask Hubot to accomplish these tasks for you:

Talking to Hubot

Getting Involved

Make-me is still rough around the edges. It started out as a quick project to get something working and has evolved many new features from there. If you want to help out, check out the issues and send a pull request!

We hope this encourages more folks to dabble with 3D printing and automate some inefficiency.

January 2, 2013 jonrohan

A More Transparent Clipboard Button

Copying long lines of text and SHAs to your clipboard has been just a click away for a few years now. Today we're putting a new face on that click-to-copy feature, making it easier to integrate with the rest of the site.

Screen Shot 2012-12-31 at 3 11 38 PM

Today we're upgrading all the clipboard buttons to ZeroClipboard.

With ZeroClipboard we can glue the Flash object (currently the only reliable way to put data on the clipboard) to any DOM element we want, leaving the styling up to us.

Here are some examples:

Screen Shot 2012-12-31 at 3 10 21 PM

Screen Shot 2012-12-31 at 3 10 40 PM

Screen Shot 2012-12-31 at 3 10 55 PM

"Copy and Paste is so Yesterday"

December 26, 2012 imbriaco

Downtime last Saturday

On Saturday, December 22nd we had a significant outage and we want to take the time to explain what happened. This was one of the worst outages in the history of GitHub, and it's not at all acceptable to us. I'm very sorry that it happened and our entire team is working hard to prevent similar problems in the future.

Background

We had a scheduled maintenance window Saturday morning to perform software updates on our aggregation switches. This software update was recommended by our network vendor and was expected to address the problems that we encountered in an earlier outage. We had tested this upgrade on a number of similar devices without incident, so we had a good deal of confidence. Still, performing an update like this is always a risky proposition, so we scheduled a maintenance window and had support personnel from our vendor on the phone during the upgrade in case of unforeseen problems.

What went wrong?

In our network, each of our access switches, which our servers are connected to, is also connected to a pair of aggregation switches. These aggregation switches are installed in pairs and use a feature called MLAG to appear as a single switch to the access switches for the purposes of link aggregation, spanning tree, and other layer 2 protocols that expect to have a single master device. This allows us to perform maintenance tasks on one aggregation switch without impacting the partner switch or the connectivity for the access switches. We have used this feature successfully many times.

Our plan involved upgrading the aggregation switches one at a time, a process called in-service software upgrade. You upload new software to one switch, configure the switch to reboot on the new version, and issue a reload command. The remaining switch detects that its peer is no longer connected and begins a failover process to take control over the resources that the MLAG pair jointly managed.

We ran into some unexpected snags after the upgrade that caused 20-30 minutes of instability while we attempted to work around them within the maintenance window. Disabling the links between half of the aggregation switches and the access switches allowed us to mitigate the problems while we continued to work with our network vendor to understand the cause of the instability. This wasn't ideal since it compromised our redundancy and only allowed us to operate at half of our uplink capacity, but our traffic was low enough at the time that it didn't pose any real problems. At 1100 PST we made the decision to revert the software update and return to a redundant state at 1300 PST if we did not have a plan for resolving the issues we were experiencing with the new version.

Beginning at 1215 PST, our network vendor began gathering some final forensic information from our switches so that they could attempt to discover the root cause for the issues we'd been seeing. Most of this information gathering was isolated to collecting log files and retrieving the current hardware status of various parts of the switches. As a final step, they wanted to gather the state of one of the agents running on a switch. This involved terminating the process and causing it to write out its state in a way that could be analyzed later. Since we were performing this on the switch that had its connections to the access switches disabled, they didn't expect there to be any impact. We have performed this type of action, which is very similar to rebooting one switch in the MLAG pair, many times in the past without incident.

This is where things began going poorly. When the agent on one of the switches is terminated, the peer has a 5-second timeout period during which it waits to hear from it again. If it does not hear from the peer, but still sees active links between them, it assumes that the other switch is still running but in an inconsistent state. In this situation it is not able to safely take over the shared resources, so it defaults back to behaving as a standalone switch for purposes of link aggregation, spanning tree, and other layer 2 protocols.

Normally, this isn't a problem because the switches also watch for the links between peers to go down. When this happens they wait 2 seconds for the link to come back up. If the links do not recover, the switch assumes that its peer has died entirely and performs a stateful takeover of the MLAG resources. This type of takeover does not trigger any layer two changes.

When the agent was terminated on the first switch, the links between peers did not go down since the agent is unable to instruct the hardware to reset the links. They do not reset until the agent restarts and is again able to issue commands to the underlying switching hardware. With unlucky timing and the extra time that is required for the agent to record its running state for analysis, the link remained active long enough for the peer switch to detect a lack of heartbeat messages while still seeing an active link, and it failed over using the more disruptive method.

When this happened it caused a great deal of churn within the network as all of our aggregated links had to be re-established, leader election for spanning-tree had to take place, and all of the links in the network had to go through a spanning-tree reconvergence. This effectively caused all traffic between access switches to be blocked for roughly a minute and a half.

Fileserver Impact

Our fileserver architecture consists of a number of active/passive fileserver pairs, which use Pacemaker, Heartbeat, and DRBD to manage high availability. We use DRBD from the active node in each pair to transfer a copy of any data that changes on disk to the standby node in the pair. Heartbeat and Pacemaker work together to help manage this process and to fail over in the event of problems on the active node.

With DRBD, it's important to make sure that the data volumes are only actively mounted on one node in the cluster. DRBD helps protect against having the data mounted on both nodes by making the receiving side of the connection read-only. In addition to this, we use a STONITH (Shoot The Other Node In The Head) process to shut power down to the active node before failing over to the standby. We want to be certain that we don't wind up in a "split-brain" situation where data is written to both nodes simultaneously since this could result in potentially unrecoverable data corruption.

When the network froze, many of our fileservers, which are intentionally located in different racks for redundancy, exceeded their heartbeat timeouts and decided that they needed to take control of the fileserver resources. They issued STONITH commands to their partner nodes and attempted to take control of resources; however, some of those commands were not delivered due to the compromised network. When the network recovered and the cluster messaging between nodes came back, a number of pairs were in a state where both nodes expected to be active for the same resource. This resulted in a race where the nodes terminated one another, and we wound up with both nodes stopped for a number of our fileserver pairs.

Once we discovered this had happened, we took a number of steps immediately:

  1. We put GitHub.com into maintenance mode.
  2. We paged the entire operations team to assist with the recovery.
  3. We downgraded both aggregation switches to the previous software version.
  4. We developed a plan to restore service.
  5. We monitored the network for roughly thirty minutes to ensure that it was stable before beginning recovery.

Recovery

When both nodes are stopped in this way, it's important that the node that was active before the failure is active again when brought back online, since it has the most up-to-date view of what the current state of the filesystem should be. In most cases it was straightforward for us to determine which node was the active node when the fileserver pair went down by reviewing our centralized log data. In some cases, though, the log information was inconclusive and we had to boot up one node in the pair without starting the fileserver resources, examine its local log files, and make a determination about which node should be active.

This recovery was a very time-consuming process and we made the decision to leave the site in maintenance mode until we had recovered every fileserver pair. That process took over five hours to complete because of how widespread the problem was; we had to restart a large percentage of the entire GitHub file storage infrastructure, validate that things were working as expected, and make sure that all of the pairs were properly replicating between themselves again. This process proceeded without incident, and we returned the site to service at 20:23 PST.

Where do we go from here?

  1. We worked closely with our network vendor to identify and understand the problems that led to MLAG failing to fail over in the way that we expected. While it behaved as designed, our vendor plans to revisit the respective timeouts so that more time is allowed for link failures to be detected, guarding against this type of event.
  2. We are postponing any software upgrades to the aggregation network until we have a functional duplicate of our production environment in staging to test against. This work was already underway. In the meantime, we will continue to monitor for the MAC address learning problems that we discussed in our previous report and apply a workaround as necessary.
  3. From now on, we will place our fileservers' high-availability software into maintenance mode before we perform any network changes, no matter how minor, at the switching level. This allows the servers to continue functioning while preventing any automated failover actions.
  4. The fact that the cluster communication between fileserver nodes relies on any network infrastructure has been a known problem for some time. We're actively working with our hosting provider to address this.
  5. We are reviewing all of our high availability configurations with fresh eyes to make sure that the failover behavior is appropriate.

Summary

I couldn't be more sorry about the downtime and the impact that downtime had on our customers. We always use problems like this as an opportunity for us to improve, and this will be no exception. Thank you for your continued support of GitHub; we are working hard and making significant investments to make sure we live up to the trust you've placed in us.

December 20, 2012 imbriaco

Scheduled Maintenance Windows

As our infrastructure continues to grow and evolve, it's sometimes necessary to perform system maintenance that may cause downtime. We have a number of projects queued up over the coming months to take our infrastructure to the next level, so we are announcing a scheduled maintenance window on Saturday mornings beginning at 0500 Pacific.

We do not intend to perform maintenance every Saturday, and even when we do, most of it will not be disruptive to customers. We will use these windows only in cases where the tasks we're performing carry a higher-than-normal risk of impacting the site.

We will always update our status site before we begin and again when we're done. In cases where we expect there to be more than a few minutes of disruption we will also make an announcement on the GitHub Blog by the preceding Friday.

To get things started on the right foot, we will be performing an upgrade of the software on some of our network switches this Saturday during the new maintenance window. We do not expect this to cause any visible disruption.

December 19, 2012 sundaykofax

Sara Pyle is a GitHubber

Sara Pyle is joining the ranks of the supportocats! She comes to us fresh from the world of Library Science (which continues to be a real thing), having worked in IT support for librarians, which is kind of like watching the watchmen.

She has the social savvy of Clay Shirky, surfs the world wide web, and has been an actual webmaster. In fact, she's basically like Julia Stiles in that one episode of Ghostwriter.

She can jam with console cowboys in cyberspace, has read Neuromancer, has experienced newscape, dreamscape, Netscape Navigator, AND cyberpunk.

When she's not rooting for the Oklahoma City Thunder or coddling a grumpy old cat, Sara bakes vegan things[1], reads All The Books[2], and hangs out with her husband and nieces.

She's fixin' to support y'all.

sarapyle

You can follow her on Twitter or on Goodreads.

[1] Not because she's vegan, but because she keeps forgetting to buy eggs.

[2] Bonus if those books are graphic novels, intended for teenagers, and/or: fantasy, books about books/general meta, thrillers about serial killers/unsolved crimes, Tudor history, film/pop culture criticism, feature long digressive footnotes.

December 19, 2012 briandoll

The Octoverse in 2012

I am continually blown away by the staggering amount of work happening on GitHub. Every day, our users commit code, open and close issues, and make plans for their software to take over the world. We track all of this activity and make the public data available via our API.

Over half a million individual events happen every day on GitHub. Here's a look into the ever-expanding Octoverse in 2012.

Push It

2012 GitHub Activity

Since the beginning of the year, we've seen a doubling in activity, with pushes alone responsible for over 60% of the events in a given day. On a typical weekday, 10k people sign up for a GitHub account, and our users:

  • push 140GB of new data
  • create 25k repositories and 7k pull requests
  • push to 125k repositories

Best of all:

  • 10k people create their very first repository

We're Growing. Fast.

Looking over the past few years, the number of people using GitHub is growing at an incredible rate; there are now 2.8MM GitHub users, which represents 133% growth in 2012 alone. Even more impressive is how much those users are doing on GitHub. In that same time period, the overall number of repositories increased 171% to 4.6MM.

Year-over-year user and repository growth

Since software is changing the world, it shouldn't be surprising that it's developed by people from all corners of the globe. While the United States is the most active country on GitHub.com, it accounts for only 28% of our traffic.

The top 10 countries visiting GitHub.com are: United States, Germany, United Kingdom, China, Japan, France, India, Canada, Russia and Brazil. The top 10 cities are: London, San Francisco, New York, Paris, Moscow, Beijing, Berlin, Bangalore, Sydney and Toronto.

Notable OSS in 2012

Stars are a way to keep track of repositories that you find interesting. These projects, all created this year, attracted the most stargazers:

  1. FortAwesome/FontAwesome: The iconic font designed for use with Twitter Bootstrap
  2. textmate/textmate: TextMate is a graphical text editor for OS X 10.7+
  3. meteor/meteor: Meteor, an ultra-simple, database-everywhere, data-on-the-wire, pure-Javascript web framework
  4. saasbook/hw3_rottenpotatoes: A project used in a free Software as a Service course taught through BerkeleyX
  5. ivaynberg/select2: Select2 is a jQuery based replacement for select boxes
  6. jkbr/httpie: HTTPie is a CLI, cURL-like tool for humans
  7. maker/ratchet: Prototype iPhone apps with simple HTML, CSS, and JS components
  8. twitter/bower: A package manager for the web
  9. Kicksend/mailcheck: Email domain spelling suggester
  10. jmechner/Prince-of-Persia-Apple-II: A running-jumping-swordfighting game for the Apple II from 1985-89

It's better to work together than to work alone. By developing software on GitHub, you're making it easy for 2.8MM people to help you out. In the past year, these projects attracted the highest numbers of unique contributors:

  1. mxcl/homebrew: The missing package manager for OS X
  2. rails/rails: Ruby on Rails
  3. CyanogenMod/android_frameworks_base: Android base frameworks
  4. CocoaPods/Specs: CocoaPods (cocoapods.org) specifications
  5. symfony/symfony: The Symfony PHP framework
  6. zendframework/zf2: Zend Framework
  7. openstack/nova: OpenStack Compute (Nova)
  8. saltstack/salt: Central system and configuration manager for infrastructure
  9. TrinityCore/TrinityCore: TrinityCore Open Source MMO Framework
  10. github/hubot-scripts: optional scripts for hubot, a customizable, kegerator-powered life embetterment robot

:heart::boom::camel:

Across commit messages, issues, pull requests, and comments, emoji is a vital part of GitHub's daily workflow. Life, and our products, just wouldn't be the same without it. When we looked at the popular emoji used on weekdays (green) versus those same emoji on weekends (blue), we saw that the :fire::fire::fire: is spreading.

During the week, the business of :ship:ing gets done, with :shipit:, :sparkles:, :-1:, and :+1: taking the lead:

Weekday and weekend emoji

The most popular emoji on the weekend paint a different picture; time for a :cocktail: under a :palm_tree::

Weekend emoji

Thank you!

We believe GitHub is the best place to build software, but it wouldn't be the same without you. Thank you for building, sharing and shipping. Thank you for proving that it's better to work together than to work alone.

From the GitHub family to you, thanks. Next year is going to be even more amazing.

Markdown Cheat Sheet

Format Text

Headers

# This is an <h1> tag
## This is an <h2> tag
###### This is an <h6> tag

Text styles

*This text will be italic*
_This will also be italic_
**This text will be bold**
__This will also be bold__

*You **can** combine them*

Lists

Unordered

* Item 1
* Item 2
  * Item 2a
  * Item 2b

Ordered

1. Item 1
2. Item 2
3. Item 3
   * Item 3a
   * Item 3b

Miscellaneous

Images

![GitHub Logo](/images/logo.png)
Format: ![Alt Text](url)

Links

http://github.com - automatic!
[GitHub](http://github.com)

Blockquotes

As Kanye West said:

> We're living the future so
> the present is our past.

Code Examples in Markdown

Syntax highlighting with GFM

```javascript
function fancyAlert(arg) {
  if(arg) {
    $.facebox({div:'#foo'})
  }
}
```

Or, indent your code 4 spaces

Here is a Python code example
without syntax highlighting:

    def foo():
      if not bar:
        return True

Inline code for comments

I think you should use an
`<addr>` element here instead.