The Journal of Joel McCracken

The journeys of a fascinated programmer

JavaScript Could Have Been Much, Much Worse

There are many JavaScript misconceptions floating around. I have long hesitated to get into the discussion, but I think it’s time to start. I have been programming in JavaScript for many years now and have gotten to know the language quite well.

JavaScript has flaws. Beyond its flaws, it contains some strange ideas that seem very bizarre to many programmers from the common C/Java background. I get that.

However, most of those idiosyncrasies (e.g. parseInt, lack of a rich standard library) don’t cause problems in practice. JavaScript has a large ecosystem of libraries that work around these flaws.

Unfamiliar is not the same as bad. The unfamiliar components of JavaScript (e.g. closures and prototypes) actually come from a very interesting theoretical background. They are very useful and powerful if you take the time to learn the concepts.

Really, how I personally feel about JavaScript doesn’t matter. Web developers must know JavaScript. It is the programming language of the web. Love it or hate it, you must use it.

I think this is the reason many programmers hate JavaScript. Most of the time, developers don’t set out to “learn JavaScript”. They want to make their website do something interesting. Nobody likes to be forced to do anything. So, when they come across these unfamiliar things, they get frustrated, and the language seems unnecessarily bad.

The fact is, though, that JavaScript’s rise to importance is a historical accident. Almost any programming language designed for the web could have had similar success. The fact that JavaScript is a pretty nice language had little to do with it. Web developers commonly go to great lengths to make their websites work where they need to. We make do with whatever the browsers give us.

A number of years ago, Microsoft was pushing its own competitor to JavaScript named VBScript. Based on Visual Basic syntax, it was familiar to the many Microsoft programmers who already knew Visual Basic. In the days before C#, this was actually a large percentage of programmers. For a short time, VBScript was fairly popular. Some pundits wondered if Microsoft was going to succeed in displacing JavaScript with VBScript. After all, at the time, most new websites were being built for Internet Explorer.

Fortunately, VBScript did not catch on in any significant way. I don’t know why it didn’t. In the days of IE6, Internet Explorer was essentially the only browser on the internet, and Microsoft had great power over the web. My guess is that Microsoft didn’t see VBScript as being essential to its success, and so it did not push the matter. If things had worked out differently, today we would be creating languages that compile to VBScript, not JavaScript.

Personally, I am happy with JavaScript. There are things I would change about it, but it is pretty great, all things considered. JavaScript could have been much, much worse.

Name Emacs Daemons With the ‘–daemon=’ Option

Emacs has a great “daemonization” feature which allows the user to connect, via a “client”, to a currently-running Emacs instance (“the server”). The client looks and feels just like a regular Emacs instance.

Creating an Emacs daemon is straightforward. The emacs --daemon command creates a new Emacs daemon instance. Running the command server-start from within Emacs will ‘daemonize’ the current instance, allowing new emacsclients to connect to it.

Multiple daemon instances can be run, each with a unique name. A client may then specify which emacs server to connect to via the -s <servername> option. That way, you can have as many Emacs instances running as you want, and connect to them freely.
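For example, with two servers running under the (invented) names work and scratch, each client picks its server with -s:

```
$ emacsclient -t -s work       # attach a terminal client to the "work" server
$ emacsclient -c -s scratch    # open a new GUI frame on the "scratch" server
```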

However, naming Emacs daemon instances is not straightforward. There is a variable, server-name, which controls what the server will be named, as long as it is set at the time of daemonization. So, launching a named Emacs instance from the command line was really awkward. The easiest method I had found was something like:

emacs -e '(setq server-name "my-special-server")' --daemon

This sets the server-name variable before daemonization starts. It works, but it is awkward, and you also need to deal with quoting the Lisp code.

One day, it struck me: what if --daemon takes a name as an argument, but this just isn’t documented anywhere?

As it turns out, it does. The above may be accomplished by the following, which is much more attractive:

emacs --daemon=my-special-server

Suddenly, launching new Emacs servers is much easier.

Exporting and Re-Encrypting Passwords From LastPass

As part of my ongoing process to address my personal technical debt, I have been trying to figure out how to handle my passwords. See, I have not maintained good personal password policies. To help get control of that, I’ve been using LastPass.

Unfortunately, LastPass is not free software. Like most other practically-minded people, I use closed-source software, but I think of it as a risky choice, and as its own form of technical debt. There is something troubling about replacing one form of technical debt with another.

That being said, I am trying to be more practical and less ideological, whenever possible. Good enough is good enough. Better is better. Perfection doesn’t exist. Thus, I think LastPass is a reasonably good way to tackle my password problems.

However, one requirement I have is to be able to export my LastPass passwords. I don’t want to be dependent upon LastPass for my entire online life.

I was able to hack together a nice little script to do this for me. Basically, it:

  1. Prompts the user for LastPass authentication data. Passwords are read via IO#noecho so your passwords won’t be visible on the console.

  2. Contacts LastPass and downloads the password database. The LastPass ruby gem makes this easy.

  3. Prompts the user for a password to encrypt the downloaded LastPass data with.

  4. Uses the gpg command to create a password-encrypted database.

The code is available on GitHub. It requires the lastpass gem to be installed, along with gpg.
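The encryption in step 4 can be sketched with gpg alone. The file names and passphrase below are invented for the demo, and this assumes a GnuPG 2.x gpg on the PATH:

```shell
# A stand-in for the downloaded password data
printf 'example secret\n' > /tmp/lastpass-demo.txt

# -c does symmetric (passphrase-only) encryption; --batch and
# --pinentry-mode loopback let us supply the passphrase non-interactively
gpg --batch --yes --pinentry-mode loopback --passphrase demo-pass \
    -c -o /tmp/lastpass-demo.gpg /tmp/lastpass-demo.txt

# Decrypting works the same way, writing the plaintext to stdout
gpg --batch --quiet --pinentry-mode loopback --passphrase demo-pass \
    -d /tmp/lastpass-demo.gpg
```

The real script prompts for the passphrase instead of passing it on the command line, which is the safer practice outside of a demo.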

Example Usage:

bash-3.2$ lastpass-backup ~/Dropbox/foo.gpg
Lastpass Email:
Lastpass Password:
Connecting to lastpass
GPG Passphrase to encrypt export:
You access the export by running:
    gpg -d /Users/joel/Dropbox/foo.gpg
And entering the GPG passphrase you used.

Accessing the database:

bash-3.2$ gpg -d ~/Dropbox/foo.gpg
gpg: CAST5 encrypted data
gpg: encrypted with 1 passphrase
...

Why I Use Bash

I rely on Bash for much of my scripting needs. I really don’t like it, though. So why don’t I use something like Ruby instead?

There are a few reasons I use Bash. It works very well as a “bootstrap language”. Bash is good at setting up an environment and running complicated commands. Problems with Bash arise when you need to do anything even slightly complicated, but for simple tasks it works very well.

Bash is a de-facto standard. It is available by default on all the *nix operating systems I care about. My goal is to develop a stable, repeatable, and testable computing environment, and for this, Bash works. It can be relied upon as a platform to launch other software.

Bash is standard among hackers. Shell examples are typically given in Bash. I can expect another programmer to be able to reasonably comprehend and modify a Bash script. In this way, Bash transcends cultural differences among programmers. I can use Bash skills on any type of project. Knowing Bash, then, is also extremely valuable.

I would love to use something else. Awk seems like it would be a very good choice, but I don’t know it very well yet.

Ruby could be a good choice here, but I don’t believe it currently is. Before I could ever rely on Ruby as a bootstrap language, I need to believe a Ruby script on one system will work reliably on another without having to dictate too much about either system’s environment. I need to see binaries available that can be installed anywhere. These binaries should be able to be placed anywhere on the system and still have everything work. Gems installed on one Ruby should not interfere with gems on any other. Environment variables from one should not interfere with environment variables from another.

Bash works well. I wish there were a better choice for standard systems scripting, but I don’t think there is.

Resources for Learning Bash Scripting

I have an intense love-hate relationship with Bash. It sure feels like achieving reasonable proficiency at Bash has been the hardest and most frustrating thing in my professional career thus far. Yet, it is extremely rewarding. Bash is extremely effective in certain situations. This effectiveness is what has enticed me to keep at it.

There are many things about Bash that make it awkward. Knowing where to look for answers is really invaluable. Most of these resources actually come from #bash on Freenode. Unfortunately, I cannot recommend the channel itself, as I have had bad experiences there. Still, these resources are the clearest and most useful I have found.

  1. The Bash manual is surprisingly complete and clear. It provides a good first reference for variables, substitutions, and other Bash nuances.

  2. The Bash Guide provides a reasonable general overview of Bash. This is a great orientation to the world of shell scripting.

  3. The Bash FAQ. You will encounter behavior in Bash that is baffling. My first real understanding of the complexities in Bash came from looking at the “I’m trying to put a command in a variable, but the complex cases always fail!” Bash FAQ entry. These entries are great for clearing up confusion. The same goes for the Bash Pitfalls reference.
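That “command in a variable” entry is worth a quick sketch. This is my own minimal reconstruction of the pitfall, not code from the FAQ itself:

```shell
# Pitfall: a command with quoted arguments stored in a plain string variable
cmd='printf %s\n "two words"'
$cmd            # word splitting re-splits the string; the quotes become
                # literal characters, so printf gets '"two' and 'words"'

# The fix the FAQ recommends: store the command as an array
cmd=(printf '%s\n' 'two words')
"${cmd[@]}"     # runs printf with 'two words' intact as a single argument
```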

I hope to add more quality resources in the future as I find them. Hopefully you, dear reader, will have an easier time than I did wrangling the beast that is Bash.

Using Custom Rubies in the Shebangs of Executables

I try to automate things. Ruby scripts as Unix executables work well for many automation tasks. They sit right at the point of being both simple and powerful. However, the shebangs for such scripts are typically ugly and brittle.

In Unix terms, a shebang is the first line of a script, which specifies how the script should be interpreted. A typical shebang looks like this:

#!/usr/bin/bash

echo "hello, world"

The first line of the above script – the #!/usr/bin/bash – is the “shebang” we are talking about.

For a Ruby script, a shebang looks more like this:

#!/usr/bin/ruby

puts "hello, world"

On some systems, this is fine. But what if the ruby you would like to use is not at that location on another system? The env command can be used as an additional level of abstraction away from the literal path to the interpreter. The env command will search your $PATH environment variable for the executable you specify as its first argument. So, the below example would be run with the first ruby that env finds in your $PATH:

#!/usr/bin/env ruby

puts "hello, world"

Thus, our script can be run on a computer that has Ruby in a different location, and everything will work well.
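To see the $PATH search concretely, here is a small demo of my own: we put a fake ruby (just a script that announces itself) first in $PATH and watch env pick it up. The /tmp/fakebin path is invented for the demo.

```shell
# Create a fake 'ruby' executable in a throwaway directory
mkdir -p /tmp/fakebin
printf '#!/usr/bin/env bash\necho fake-ruby "$@"\n' > /tmp/fakebin/ruby
chmod +x /tmp/fakebin/ruby

# env searches $PATH left to right, so our fake ruby wins
PATH="/tmp/fakebin:$PATH" /usr/bin/env ruby --version
# prints: fake-ruby --version
```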

However, this doesn’t completely end the complication. Say, for example, our computer has Ruby version 1.9.3 and uses 1.9.3 features, whereas another has 1.8.7.

If we only specify ruby in the script’s shebang, our script won’t work on that other computer.

We really want to be able to declaratively specify the type of interpreter our script requires. We do not care about the location of the interpreter. We just care that it exists and that it provides the features our script requires. For example, we need ‘ruby, but only version 2.0’. Some systems provide us with executables named ‘ruby-<version>’, but not all.

Fortunately, we can build our own executables and reference them in our shebangs. We can make the names of those executables as descriptive as necessary.

Let’s say our system uses RVM, and we want to be able to write scripts that depend upon the fact that they are running in Ruby 2.0. I assume you have ~/bin in your $PATH.

First, create the file ~/bin/ruby-2.0 and add the following lines to it:

#!/usr/bin/env bash
exec rvm ruby-2.0.0-<your patch level here> do ruby "$@"

Then save the file and mark it as executable (chmod +x ~/bin/ruby-2.0).

That’s it! You can now use the ruby-2.0 executable in your shebangs. This shows the whole thing, all set up and working together:

bash-3.2$ which ruby-2.0
/Users/joel/bin/ruby-2.0

bash-3.2$ cat `which ruby-2.0`
#!/usr/bin/env bash
exec rvm ruby-2.0.0-p247 do ruby "$@"

bash-3.2$ cat test.rb
#!/usr/bin/env ruby-2.0

puts RUBY_VERSION

bash-3.2$ ./test.rb
2.0.0

If we wanted to use our ./test.rb script on a system that doesn’t use RVM, but has a Ruby 2.0 at /usr/local/ruby-2.0/bin/ruby, we could create a ruby-2.0 executable with the following:

#!/usr/bin/env bash
exec /usr/local/ruby-2.0/bin/ruby "$@"

The two parts of the above scripts that need to be mentioned are exec and "$@". exec tells Bash to replace the currently executing script with the command that follows. This makes dealing with IO simpler, and the ruby-2.0 executable won’t stay around as a useless process for the duration of our script’s execution. The "$@" formulation tells Bash to pass all of its arguments along to the new process.
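Both pieces can be seen in isolation with a toy wrapper (the /tmp path and printf payload are invented for the demo): exec hands the process over to printf, and "$@" delivers each argument unchanged, spaces and all.

```shell
# A tiny wrapper in the same shape as the ruby-2.0 script
cat > /tmp/wrapper-demo <<'EOF'
#!/usr/bin/env bash
# exec replaces the wrapper process; "$@" forwards every argument intact
exec printf '<%s>' "$@"
EOF
chmod +x /tmp/wrapper-demo

/tmp/wrapper-demo one "two words"
# prints: <one><two words>
```

Note that "two words" arrives as a single argument; with an unquoted $@ it would have been split in two.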

This technique can be adapted to all sorts of applications, not just Ruby. The ability to create your own executables removes a barrier to automating the stuff in your life.


Advice for Getting Started With Getting Things Done

I started to do Getting Things Done in September of this year, and I love it. It has helped me get rid of a million little things that distract me, and thereby improve my focus upon what is important. I wanted to quickly share some getting started advice. These are the major lessons I have learned as I have been developing my own system.

  • Start implementing the GTD system immediately. Do not wait. Anything is better than nothing. If you don’t own the book, order it right now. In the meantime, do an internet search for “basics of getting things done”, write down a few policies to implement, and start. The “system” rewards time invested in it. Every bit of time you invest helps, and this is especially true over the long run.

  • Buy the book and read it. The book teaches an understanding that is essential to the system. If you do not understand it, your system will not work. Getting Things Done cannot be effective as only a bulleted list of principles or a flow chart of rules. You must understand the theory to be effective.

  • The GTD system is your system. It’s just a set of principles. How you implement them depends upon you and your current situation.

  • Do not let perfect be the enemy of better. Expect your system to get a bit out of date and have rough edges. Refactor and iterate.

  • Moreover, there is no such thing as perfect.

  • Pay attention to what feels difficult. That is what you need to work on; the difficulty shows you where to improve.

  • Getting started is much harder than maintaining. When you start, you will have an infinite list of projects. The good news: odds are that you will get a bunch of them done quickly, and most of the others will go onto someday/maybe.

  • Do not be afraid to put things onto someday/maybe. For me, it was probably the single most valuable part of the system. Remember that “someday” can mean “I just can’t act on this right now”, be that because of restrictions of time, money, or health.

  • Getting Things Done does not replace your own agency. You are still your own master. The system is merely a set of insights and practices that help lots of people keep their focus on what is important.

  • Finally, try to implement one idea at a time. Don’t try to make huge, drastic changes all at once.

The main point is to keep trying. If something seems a little hard to you, try to think of a way to make it easier. Keep going back to the book, think about the principles, notice things you can do to improve, and keep trying.

Docker and Fixing the Internet

I’ve been hearing about LXC and Docker (hereafter “Docker”) from all around the internet over the past few months. If you pay attention to Linux developments at all, you have probably heard about them both.

A while ago I outlined a solution to fix the internet, which would shift control of the internet to individuals and away from centralization.

Docker makes this project much easier. Installing and running many pieces of software on a system can cause them to interact in problematic ways. Beyond that, the entire system is made more complicated by each new component that is installed.

Instead of having configuration, logs, data, and binaries all on the same system, Docker lets us contain each application in its own machine. That way, each container is much easier to understand and debug, and can be considered independently of the rest of the system. Before Docker, we could have simplified the project in the same way by installing each piece of software on a separate machine. Realistically, that would cost way too much.
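As a sketch of that isolation, each application can be described by its own image definition. Everything below (the base image, the nginx example, the file names) is hypothetical, just to show the shape:

```dockerfile
# One application per container: its binaries, config, and logs
# travel together and stay out of the host system.
FROM debian:stable
RUN apt-get update && apt-get install -y --no-install-recommends nginx
COPY site.conf /etc/nginx/conf.d/site.conf
CMD ["nginx", "-g", "daemon off;"]
```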

I am pumped. In my mind, Docker removes the majority of the work blocking us from having such a system today.

Adding Comments to the Blog

Comments are one of those things that I’ve always felt a little hesitant about, for many reasons. However, I think I’d like to experiment with them. So, thanks to the Disqus people, I’ve added comments.