Last Updated: February 25, 2016

Add some git-powered muscle to your project I

Some of the best features in git (the source code management tool) are not easily discovered until you are confronted with them. Often enough, it is easy to use a program without fully utilizing its power, and you only find out what you have been missing much later. Few tools are as essential as version control tools, and git, created by Linus Torvalds himself, is an awesome force to be reckoned with: missing out on the powerful features it holds can become a real handicap.

Organizing your workflow

First in this collection of pro-tips on git should be the adoption of a successful Git branching model. You can find the git-flow tool on Github and, for zsh users, there is a plugin available for a quick and easy:

$ git-flow init

The above will initialize a new git repository under the present working directory with a standardized model for branches. Read more in the article by Vincent Driessen (nvie), the author of the tool.
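The model that git-flow standardizes can also be reproduced with plain git commands. The sketch below (branch and file names are made up for the demo, and no git-flow installation is required) shows the essence: a long-running develop branch next to master, and short-lived feature branches that merge back into develop.

```shell
# A throwaway repository to play in.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# The long-running integration branch next to master/main.
git checkout -q -b develop

# A short-lived feature branch, much like `git flow feature start` creates.
git checkout -q -b feature/login
echo "login form" > login.txt
git add login.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add login form"

# Merge the finished feature back into develop and delete its branch.
git checkout -q develop
git -c user.name=demo -c user.email=demo@example.com \
    merge -q --no-ff -m "merge feature/login" feature/login
git branch -d feature/login
```

git-flow wraps exactly this choreography into single commands, so you never merge a feature into the wrong branch by accident.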

Add upstream support

Nothing wastes precious time one could have spent on coding like spending it on debugging code instead. Although definitely educational from the perspective of mastering new technologies, it can quickly become a pain to decipher streams of exception stacks and (memory) dumps, first trying to locate the source of the error (if any output is even remotely helpful to humans) and then trying to fix it. This often requires a deeper understanding of the inner workings of a particular program or library that you may not be familiar with, so you cannot fully oversee the consequences of the changes you make to the local piece of code you try to fix. The program might refuse to run, or worse: regression errors, exceptions that only happen under rare conditions (those special corner cases), and the security risks involved are impossible to foresee in advance. Or there might be a future direction the project is evolving towards that you need to consider or anticipate.

And let's be honest: 'testing' and 'integrating' can be extremely hard and time consuming. Who likes to fully test every in-and-out aspect of every program (module) they want to use? Then there are the kind of people who find joy in feeding programs complex expressions of (meta) characters, trying to hack or crack them. For the majority of us out there, crafting complex sequences of code and regular expressions is not the great joy in life (although understanding character encodings and performing regular expressions remain among the most valuable skills to master).

So why go through all that trouble? Of course, you already made the educated decision to include a given project, so why maintain it yourself? But simply including vendor scripts, say in the likes of jQuery, will not always cut it, because you may need to modify this code to work with your setup. Tools like jQuery are made very generic so they work across as much diversity in systems, platforms and environments as possible (like a browser, in the case of jQuery), and they try to support the whole spectrum of use cases that people have found over time. One often-heard complaint is that many of these tools are 'bloated'. So you may only want to use a certain portion of a tool which, if it wasn't built in a very modular fashion (e.g. using plugins a lot), can become a real new pain to untangle.

A few, though not all, of these complexities are easily solved by git. What you should do is fork the repository on Github (in your browser) if you want it in your project.

After you have forked that repository, in your terminal, you make a local copy, either as a stand-alone, freshly cloned project

$ git clone <url-of-your-fork>

or as part of a 'superproject' (more on that later) by utilizing git submodules, much like:

$ git submodule add <url-of-your-fork> modules/git/myfork
$ git submodule update --init

It is noteworthy that the second command, the equivalent of running git submodule init followed by git submodule update, is what actually creates the local copy of the project's contents. Whenever the superproject is freshly cloned, the submodule will seem like a local folder with everything in it, yet you are unable to use it without first initializing and then updating it.
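You can see this two-step behaviour with a quick experiment, using throwaway local repositories in place of the GitHub URLs (all paths below are made up for the demo):

```shell
tmp=$(mktemp -d) && cd "$tmp"

# A stand-in for the forked library.
git init -q lib
( cd lib && echo "hello" > lib.txt && git add lib.txt \
  && git -c user.name=demo -c user.email=demo@example.com commit -q -m "lib" )

# The superproject embeds it as a submodule.
git init -q super
( cd super \
  && git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial" \
  && git -c protocol.file.allow=always submodule --quiet add "$tmp/lib" modules/git/myfork \
  && git -c user.name=demo -c user.email=demo@example.com commit -q -m "add submodule" )

# A fresh clone of the superproject has the submodule folder, but empty...
git clone -q super superclone
ls superclone/modules/git/myfork

# ...until the submodule is initialized and updated.
( cd superclone && git -c protocol.file.allow=always submodule --quiet update --init )
cat superclone/modules/git/myfork/lib.txt
```

The first `ls` prints nothing; after `update --init` the file is there. (The `protocol.file.allow` setting is only needed because newer git versions restrict file-based submodule clones, which this local demo relies on.)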

Next we add upstream support for this project of ours by changing the current directory to that of your new copy (I played around a bit with the name to show that this path is configurable as seems fit):

$ cd modules/git/myfork
$ git remote add upstream <url-of-the-original-repository>

Now that our project knows where the 'upstream' project can be found, we can take any changes made to these source files by dozens, if not hundreds, of different developers around the world, and make them all work for us! Take a moment to celebrate this fact :)
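A quick sanity check that both remotes are wired up correctly never hurts (the URLs below are placeholders, not real repositories; nothing is fetched here):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q

# 'origin' points at your fork, 'upstream' at the original project.
git remote add origin https://example.com/you/myfork.git
git remote add upstream https://example.com/original/project.git

# Both remotes should now be listed, each with a fetch and a push URL.
git remote -v
```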

But wait! We aren't fully done yet. One final step is to add a line to your git profile, located more or less by convention at ~/.gitconfig, in its [alias] section:

pu = !"git fetch origin -v; git fetch upstream -v; git merge upstream/master"
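The same alias can also be added from the command line with git config; here it is written to a scratch file so the demo does not touch your real configuration (drop the -f flag, or use --global, to add it for real):

```shell
# Scratch config file, so ~/.gitconfig is left alone in this demo.
cfg=$(mktemp)

# Equivalent to the [alias] line above.
git config -f "$cfg" alias.pu \
    '!git fetch origin -v; git fetch upstream -v; git merge upstream/master'

# Read it back to verify.
git config -f "$cfg" --get alias.pu
```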

So from now on, in any project you add a remote upstream to, you can type:

$ git pu

And it will first fetch from your remote fork (origin), next from the original upstream repository, and finally merge any changes that are in the 'delta' (diff, or difference) between those.

Should merging fail, you will need to resolve these conflicts. If such a thing happens, I always use:

$ git mergetool -y

which opens up the three-way merge tool meld on my system. You may also want to use the always great vim, which has the tool vimdiff available by default.
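Which tool git mergetool launches is itself configurable. A quick sketch, again written to a scratch config file so nothing global changes (use --global to apply it for real):

```shell
cfg=$(mktemp)

# Pick the merge tool; 'vimdiff' would work the same way here.
git config -f "$cfg" merge.tool meld

# The persistent equivalent of passing -y (--no-prompt) every time.
git config -f "$cfg" mergetool.prompt false

# Read the chosen tool back to verify.
git config -f "$cfg" --get merge.tool
```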

More in a second part, as I am running low on time here and this tip became a bit longer than intended.