Last Updated: February 25, 2016
· brandturner

Startup Engineering Week 3

So I'm behind in the class due to having a little thing called a job. But if learning is what I really want to get out of this, who cares. Anyway, on to the show.


I've been using Linux infrequently ever since I had the displeasure of trying to install Slackware and Debian sometime in the mid-2000s. You kids with your Ubuntu have it easy. I really never dived deeply into Bash or zsh, just learned what I needed to know. Oh, how I miss those halcyon days of yore when my life was flush with free time and I could sleep until I was so sick of being in the bed that I had to get out.

Anyway, on with the knowledge. (I assure you, intrepid reader, that my grammar and prose are normally leagues higher than this. But I'm taking these notes while watching an online lecture.)

Here's some cool command line shit I learned:

echo "Whatup, nigga" >> helloworld.txt

Protip: "Whatup, nigga" is helloworld++. What's that, don't be racist you say? Ninja, please! There, you happy?!

Also, it finally dawned on me what the pipe operator is for: it takes the stdout of the command on its left and feeds it in as the stdin of the command on its right. Seems like you can do some pretty powerful shit with that. I should force myself to go on a GUI sabbatical and run my MacBook completely from the command line, save for my browser.
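For instance (the file names here are just mine for illustration):

```shell
# ls writes the directory listing to stdout; grep reads it on stdin and
# keeps only the .txt names; wc -l counts the lines that survive.
ls | grep '\.txt$' | wc -l
```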

The -e flag for echo enables interpretation of backslash escapes like \n and \t. I never look at man pages.
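Quick demo (this assumes bash's builtin echo; plain sh's echo treats -e differently):

```shell
echo -e "col1\tcol2\nsecond line"   # prints a real tab and a real newline
echo "col1\tcol2\nsecond line"      # without -e, the backslashes come through literally
```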

A double angle bracket (>>) following an echo statement appends the string to a file, while a single one (>) overwrites it.
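Seeing it is easier than saying it (notes.txt is a throwaway file of mine):

```shell
echo "first"  >  notes.txt   # single > truncates the file first
echo "second" >> notes.txt   # double >> appends to whatever is already there
cat notes.txt                # "first", then "second"
```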

curl fakeurl 2> output.txt
curl google.com fakeurl 1> googlecurl.txt 2> fakeurl.txt
curl fakeurl &> combined.txt

The names of the text files should be indicative of the output produced: 2> redirects stderr (file descriptor 2), 1> redirects stdout (file descriptor 1), and &> is bash shorthand for sending both streams to the same file.
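A no-network way to watch the streams split — I'm using ls on a bogus path instead of curl, and the file names are mine:

```shell
# The listing of . goes out on fd 1 (stdout), the complaint about the
# bogus path goes out on fd 2 (stderr), and each lands in its own file.
ls . /no/such/dir 1> out.txt 2> err.txt
ls . /no/such/dir &> both.txt   # bash shorthand: both streams, one file
```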

Normally, something like this would make my head spin:

yes | nl | head -2000 | tail -50 > data2.txt

However, I've discovered that I get much more out of screencasts than I do out of reading reference material. Anyway, the above command takes the STDOUT of yes, pipes it into the STDIN of nl (which numbers the lines), pipes that numbered output into head (which keeps only the first 2000 lines), pipes that into tail (which keeps the last 50 of those, i.e. lines 1951-2000), and writes (not appends) the result to data2.txt. The file contains:
1951 y
1952 y
1953 y
...
2000 y

It's so simple!

rsync > scp
rsync is great for transmitting large files over slow networks: it only sends the pieces that differ, and it can resume interrupted transfers.
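A sketch of the usual invocation (paths made up; for a remote box the destination would be something like user@host:/backups/ instead):

```shell
# -a preserves permissions/timestamps, -v is chatty, and --partial keeps a
# half-transferred file around so a re-run resumes instead of starting over.
rsync -av --partial --progress big-dataset/ backup/
```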

Behold this sorcery:
wget -w 2 -r -np -k -p http://example.com/

The above command will, with a wait of 2 seconds between requests (-w 2), recursively download an entire site (-r) without ascending to the parent directory (-np), convert the links to point to the local files (-k), and grab all the images, CSS, etc. needed to properly display each page (-p). What an easy way to rip a static representation of a site.
wget is like a spider without the Google steroids.

cURL is for interacting with single URLs. It's a building block for API calls.

curl <url> | sh

This will actually download the script and pipe it straight into the shell, which executes it, thereby installing it. (Probably worth eyeballing the script first, since you're running arbitrary code straight off the internet.)

nl filename | tail -1

This handy command will give me the number of lines in a file. (Almost: nl skips blank lines unless you pass -ba, so wc is the more reliable count.)
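Here's the catch in action (sample.txt is a throwaway file of mine):

```shell
printf 'one\n\nthree\n' > sample.txt   # 3 lines, the middle one blank
nl sample.txt | tail -1      # last numbered line is 2 -- the blank wasn't counted
nl -ba sample.txt | tail -1  # -ba numbers every line: 3
wc -l < sample.txt           # 3
```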

wc filename

retrieves the number of lines, words, and bytes in a file

cut -f2 *ptt | tail -n+4 | sort | uniq -c | sort -k1 -rn

I'll probably never be manipulating tabular data from the command line, and I don't know what kind of masochist would ever try such a thing. I imagine it's more efficient if you are working with large data sets. Anyhoo, given some tabular data, the above command selects the 2nd column of a file ending in ptt (cut -f2), keeps only the rows from the fourth line onward (tail -n+4), sorts the data, counts each unique value (uniq -c), and then does a reverse numerical sort on the count column (sort -k1 -rn)....I'm getting to the somewhat useless commands here. I still prefer Excel.
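To convince myself it does what I just said, here's a toy run on fake data. The file and its contents are made up; a real .ptt file is a tab-separated protein table with three header lines before the data rows, which is what the tail -n+4 is skipping.

```shell
# Three header lines, then tab-separated rows; column 2 is the strand.
printf 'Toy genome\n5 proteins\nLocation\tStrand\tGene\n' > toy.ptt
printf '1..30\t+\tgeneA\n40..60\t-\tgeneB\n70..90\t+\tgeneC\n' >> toy.ptt
printf '100..120\t+\tgeneD\n130..150\t-\tgeneE\n' >> toy.ptt

cut -f2 *ptt | tail -n+4 | sort | uniq -c | sort -k1 -rn
# => the 3 "+" rows come out first, then the 2 "-" rows
```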