Compiling your software to run almost anywhere without ever using root
Well... not always, I guess. If you still use a shared hosting solution with a provider that only gives you an FTP account to work with, this is not for you. This pro-tip shares the information that will allow you, particularly on those rootless setups which require such an approach, to use the GNU autotools inside your home folder and rebuild your own local /usr-like hierarchy of software libraries, which other software can then depend on and be built against. In particular, anyone currently setting LD_LIBRARY_PATH in e.g. ~/.bashrc or ~/.zshrc may benefit from this pro-tip. Since we assume no root access, I guess I don't have to warn against compiling packages from source as root (NEVER, EVER, DO, THIS).
Audience
Or the "who this for?". Well, let's see if I can conjure up a nice profile of who might benefit from this pro-tip. You may have already been looking for something in this order, but then very possibly do not know what to look for. Let's see. You have combat hours on the command line terminal *nix environment and, actually have one or more environments that your work on, more or less 'unsupervised' non-root access over a SSH connection. You do not have normal Debian or RPM/Yum package manager access (probably missing the commands altogether) and can't write to either /usr
or /usr/local
. You still want to run your own custom software, how to do it?
A real world example use-case
Hosting providers, such as our Magento dedicated hosting provider, may offer 'unmanaged' non-root access to (presumably jailed/hardened) environments, and corporate networks often do too. In the case of our Magento host, they run the web server park (Apache/nginx) and database farm (MySQL/MariaDB slaves), which are optimized for Magento hosting, along with - not unimportant in the database case - the frequent tuning and maintenance of the whole shebang. But user access is limited to a web-based control panel which governs the things you actually can tweak yourself. This comes with an SLA that guarantees a certain up-time and recovery-time from, erm, mostly inexperienced Magento developers I guess - and you fck up a lot in the beginning, I promise you - but the shell machine (jailed, shared by 10 accounts) obviously has no SLA. Which is good, because this is just the thing that allows us to install anything we can manage without actually breaking or cracking the security systems (if you are that elite, good for you); no SLA just means no support (how would you train support staff in command line operations? it's not really possible to cover all the terrain). Anyway, the main reason they give you this is a 6GB place to host the Magento files of possibly multiple duplicate folders, the media and... your own memcache storage. In my case the restrictions basically just prevent reading/leaking the things that the other 9 accounts are using.
Disclaimer
A word of warning in advance. The owner of the machine you may try this on could have a corporate use-policy in place that prohibits you from doing this. Please ensure that what you are about to do, compiling software from source, is actually allowed. Nothing is easier than smuggling in a text file or two, compiling them on the machine, and suddenly you may have a cracking tool. Hence a good system administrator will have removed any compilers, or will prevent access to them by regular users, after any provisioning setup has been done (if provisioning uses gcc, it may be required while setting up machines). Also, you should always know your machine, both hard- and software. I won't be held (morally or legally) accountable for any fracked-up machines or pissed-off BOFHs, so act responsibly and don't keep your finger on the trigger if you aren't absolutely sure you want to pull it.
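Before anything else, it pays to check whether a compiler and the basic build tools are even available to you; if not, this whole pro-tip is moot. A quick sanity check using only standard commands:
$ command -v gcc || echo 'no gcc in PATH'
$ command -v make || echo 'no make in PATH'
$ gcc --version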
Baseline
In this example I will show the commands I used to compile my own zsh on the provider host, since the only (ba)sh they offer bores the bazinga's out of me. I miss family functions, the zle, zparseopts and the other cool things that come with running the better of the two (zsh > bash).
$ uname -a
Linux ssh1 3.2.0-4-grsec-amd64 #1 SMP Debian 3.2.53-2grsec6 x86_64 GNU/Linux
OK, so I've confirmed that we are running a grsec kernel. When I tried to compile zsh earlier, I learned we didn't have the ncurses library installed, so I needed to get that too. Let's go download and unpack both sets of sources:
$ test -d ~/build || mkdir ~/build; cd $_
$ wget -qO- http://sourceforge.net/projects/zsh/files/zsh/5.0.5/zsh-5.0.5.tar.gz/download | tar xzv
$ wget -qO- http://ftp.gnu.org/pub/gnu/ncurses/ncurses-5.9.tar.gz | tar xzv
I decided to show off a few little tricks as an added bonus; you may want to investigate these commands if you didn't know them yet.
The normal routine (FAIL)
Software built with the GNU tools autoconf and automake - and there are many packages offered this way - will normally allow you to do e.g. the following to configure, build/compile and install it:
$ cd some-package-sources
$ ./configure
$ make && make install
If some essential library (or its headers) is missing, you'll be informed either during the configuration step or while making the software. In that case, you need to figure out what component is missing and hope for a very flat dependency tree, or get stuck in dependency hell. This is actually most of the crap that package managers like aptitude etc. try to deal with for you: automatically detect dependent packages and install them first, before the one that depends on them (that, and other atomic/race conditions, plus, well, they are comfortable to use - archaic tools like these don't have too many people knowing how to work them and then blog about it).
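If you'd rather check up front whether a library is already present on the host, you can snoop around yourself. A couple of quick checks for our ncurses example (the paths below are the usual defaults, adjust to your system):
$ /sbin/ldconfig -p | grep ncurses   # any registered shared libncurses?
$ ls /usr/include/ncurses.h 2>/dev/null   # are the development headers there?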
Either way, this attempt won't work, mainly because the unaltered commands (no arguments to ./configure, for example) will usually try to install the software into /usr and, remember, we don't have write permissions to that folder. Some more switched-on development teams actually set up /usr/local to better support user-writable folders out of the box, but often this is only the case for single-user setups or for an environment where, perhaps, a development team has been granted partial file and folder write permissions (e.g. via setfacl) on /usr/local - still, many won't.
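For the curious, such a partial grant would look something along these lines, run by the administrator (the user name here is made up purely for illustration):
# as root: grant user 'alice' write access on /usr/local, plus a default ACL for new files
$ setfacl -R -m u:alice:rwx /usr/local
$ setfacl -R -d -m u:alice:rwx /usr/local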
There is one place that is pretty much guaranteed to be writable to us: /tmp. But any system admin worth their money will disallow running compiled executables from there. No, there is a better place to do this; that's why we made the ~/build folder - not only to download into, but to build in as well. But where to install?
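Whether /tmp is in fact crippled that way is easy to verify; on a typical Linux setup the mount options tell the story:
$ grep ' /tmp ' /proc/mounts   # look for 'noexec' among the mount options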
Installing to the user home (AGAIN FAIL)
The fantastic $HOME offers the way out. It is a place which, if you have terminal access at all, you can usually rely on to be fully and truly yours in every regard. It is the perfect place to set up our 'mimic' file system hierarchy of missing libraries and packages. Common practice more or less provides the location for us, although the name is obviously not mandatory - you could, but probably shouldn't, name the directory some other way.
$ mkdir ~/local
Perfect. With that in place, we can add our very first argument to the configuration script:
$ ./configure --prefix=$HOME/local
Often enough, I found, it's that simple, and a lot of sources will compile just fine this way. However, should you require custom-compiled dynamic libraries to link against (like I do with ncurses), then you'll run into problems.
One way I traditionally dealt with this was by prefixing the ./configure command with environment variables, like:
$ env CPPFLAGS='-I/home/myuser/local/include' \
> LDFLAGS='-L/home/myuser/local/lib/' ./configure
Do note that the GNU autotools take many, many configuration options, and the documentation may not always be easy to find or to fully understand. Luckily, most of the environment variables and values/flags are either picked up automatically or hardly ever needed. For your reference, the GNU documentation is probably a useful start.
Without going into detail on what exactly CPPFLAGS (C preprocessor flags) and LDFLAGS (linker flags) are, take it from me that these may be very important and can be tweaked to match your situation. It is perhaps worth noting that -I (minus capital i) will add those paths to the already known location(s) such as e.g. /usr/include or /usr/local/include. So the native system locations are not forgotten and may still be linked against.
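Should you want to see exactly which include directories the compiler already searches by default, gcc can be persuaded to print them (a well-known trick; the output differs per system):
$ echo | gcc -xc -E -v - 2>&1 | sed -n '/#include <...> search starts here/,/End of search list/p'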
There is one final pièce de résistance left. And if you don't know its name, it will be pretty darn hard to Google for what you don't know you are actually looking for.
The elusive config.site (FTW!)
There is a very sweet built-in mechanism for letting the tools find the environment variables they need. It's called a config.site and it is everything you needed and more. Basically, it allows you to skip the tedious and time-consuming manual configuration normally associated with these commands.
- place a file in the share/ folder of your prefix, e.g. /home/myuser/local/share
- name that file config.site
- have its contents define your compilation flags
This most elegant of solutions allows for a diversely rich baseline setup. More information can be found here. For one, it allows us to define a compilation flag I've been hinting at: the grsec kernel sometimes requires the -fPIC flag. ncurses will need it, and you can read about what it does here.
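For completeness: an autoconf-generated configure script looks for $prefix/share/config.site (and $prefix/etc/config.site) on its own, and you can also point it at a file explicitly through the CONFIG_SITE environment variable:
$ CONFIG_SITE=$HOME/local/share/config.site ./configure --prefix=$HOME/local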
I have the following defined in ~/local/share/config.site
CHOST="x86_64-unknown-linux-gnu"
# util-linux and other programs might complain about libraries not in a standard location
# this goes for a few header files like <ncurses.h> and <term.h> which many libs/apps use
# the last -I addition, pointing into /ncurses, fixes this: without it, anything that
# relies on those headers fails to build
CPPFLAGS="-I/home/myuser/local/include -I/home/myuser/local/include/ncurses"
LDFLAGS=-L/home/myuser/local/lib
# on the next line you may need to toggle or remove -fPIC / -fstack-protector,
# for not everything appreciates their use (a glibc compile may fail, for example)
CFLAGS="-march=native -O2 -pipe -fPIC -fstack-protector"
CXXFLAGS="$CFLAGS"
# -j sets the number of jobs to run simultaneously; a common rule of thumb,
# at least in Gentoo/Arch circles, is the number of cores plus 1
MAKEFLAGS="-j5"
# foreign function interface library: high level programming interface
# for one, it allows packages to compile without explicit use of pkg-config tool
LIBFFI_CFLAGS=-I/home/myuser/local/lib/libffi-3.1/include
LIBFFI_LIBS="-L/home/myuser/local/lib -lffi"
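You can tell whether the file is actually being picked up: when it is, configure announces it near the top of its output, something like:
$ ./configure --prefix=$HOME/local
configure: loading site script /home/myuser/local/share/config.site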
You may want to take some time to carefully read and study these and the other environment variables that affect gcc. A whole bunch of them may effectively never be used, but a few are really useful to set up properly, and doing so will further tune and optimize your build process. But beware: ensure the hardware stats you provide are correct. Do not set -j higher than your machine can handle, and while using core2 instead of native may slightly optimize your output, the latter is most often the safest bet you can't go wrong on! You would do well to inform yourself of any culprits that might apply to your situation before doing anything rash and stupid (you have been warned enough). Google the questions you may have. Just as a general note: documentation, references, blog posts and other sources of information become very scarce at some point as you travel deeper into the matrix - from userland to kernel to machine code - and the less you'll be able to dig up the kind of sources (thanks captain obvious) you might be used to and comfortable with. Myself included: somehow I'll never learn to love the Zsh, Apache or GNU manuals, but the information generally is there; you'll just have to ask the right questions, search smart and go get it. Oh, and welcome to the world of mailing lists, usenet and IRC, in case you hadn't familiarized yourself yet. You'll see a lot of those.
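Rather than guessing at a -j value, ask the machine how many cores it has; nproc is part of GNU coreutils, with /proc/cpuinfo as a fallback (the 4 shown is example output):
$ nproc
4
$ grep -c ^processor /proc/cpuinfo   # fallback if nproc is missing
4
Cores plus one then gives the MAKEFLAGS="-j5" used above.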
Now, back to the example case. We should now go and try to build the ncurses terminal library and, given the config.site example file referenced above, we can (once you've studied and adjusted the values to your personal situation) advance:
$ cd ~/build/ncurses-source-folder
$ mkdir build && cd build
$ ../configure --prefix=$HOME/local
$ make && make install
Thanks to our new-found 'magic', we're back to 'clean cut' commands again; even though the actual setup for building the software has some added complexity in the back, the end result in the front is a whole lot more desirable. One thing to keep in mind though: not every package will be so lenient, and you might have to resort to more snooping around to figure out any specific long or short options you need to provide. Luckily, the error output is often a good enough hint to solve problems fast, and very rarely does one actually have to 'go in deep' and change e.g. autoconf, m4, libtool and what-not files. The switches may apply only to that one tool and your own specific setup, so it's hard to be specific, but often you can also include and exclude libraries/modules compiled into the resulting binaries. Ever had e.g. Vim complain that Ruby wasn't included (:version shows these)? Compilation switches.
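The standard way to discover those per-package switches is the configure script's own help output, for example:
$ ./configure --help | less   # all generic and package-specific options
$ ./configure --help | grep -i ruby   # e.g. hunting for that Vim Ruby switch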
The pattern repeats itself for the zsh source folder. At this point I realized an earlier violation of a best practice: I originally gave the commands to compile straight in ncurses-source-folder/ (I have corrected this above, but thought I should share), while it is much better not to build directly in the source tree. A sub-folder is acceptable though (from what I remember), so as the second command in the sequence below, we create the build/ folder and configure + make from there:
$ cd ~/build/zsh-source-folder
$ mkdir build && cd build
$ ../configure --prefix=$HOME/local
$ make && make install
Should anything go wrong and you'd have to start over somehow, just remember:
$ make distclean
The typical standard is that make clean will remove all intermediate files, and make distclean makes the tree just the way it was when it was un-tarred (or something pretty close), including removing any configure script output. And depending on the package authors, other make targets may often be available: test, uninstall, dist and such.
I think I should provide the example proper closure, that is to say, tie up the last loose ends, because we're not there yet a 100%. At least one more step is needed to truly and fully reunite me with an old friend in foreign lands: quite understandably, without root one cannot change the login shell (usually done with chsh), since that modifies the /etc/passwd file so that on next login we could use zsh straight away. Due to these constraints, I had to resort to what I might call a bash-provided piggyback ride, a jump-start to get the whole shebang going. Bash delivers just the mechanism for this 'hack': using the bash auto-loaded files we actually can modify (in our home), being either ~/.profile (didn't work on my host) or ~/.bash_profile, which did work:
# ~/.bash_profile
export PATH="$HOME/local/bin:$PATH" # ensure shebang #!/usr/bin/env zsh works
export SHELL=$HOME/local/bin/zsh # set the shell environment variable
exec $SHELL # do not spawn a new sub-shell / fork the process - rather replace altogether
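A small hedge I would add on top of this (my own variation, not part of the original setup): guard the exec so a missing or broken zsh binary can't lock you out of your account, and so non-interactive logins are left alone:
# ~/.bash_profile - safer variant
export PATH="$HOME/local/bin:$PATH"
if [ -t 1 ] && [ -x "$HOME/local/bin/zsh" ]; then  # interactive, and zsh actually exists
    export SHELL="$HOME/local/bin/zsh"
    exec "$SHELL" -l
fi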
Most of the techniques illustrated in this example should give you enough to work with. Keep in mind that the chain can break on missing dependencies, either during configuration or later when making the software. In that case, inspect the output and find the missing command, then Google which package that command belongs to and try to find its source code. Pretty much all of GNU can be downloaded as source, and a lot of stuff can also be found on GitHub.
Enjoy tinkering and use your new found powers wisely.
P.S. I'd love to hear from you. Comments, additions, errors, criticism or just plain feedback is appreciated, and if anything requires elaboration or clarification, please do ask in the comments.
Written by Rob Jentzema
1 Response
Thanks for an excellent article. Before I venture down this path, would like to know if it would work in situations where the remote machine has no compiler in the first place. Tried looking for articles installing gcc without root, but then that needed a compiler as a pre-requisite. Any insights would be helpful.