The most recent lecture presented a four-function calculator implementation in Scheme. This calculator is a simple REPL interpreter: it reads user input, evaluates it, prints the output, and loops back again.
What follows is a summary of the lecture with my takeaways, including a description of the program (and why it’s worth considering), the original Scheme program from the lecture, and my translation of it into Clojure.
There are two main reasons to look at this: it shows the essential structure of an interpreter in miniature, and it makes for a nice comparison between Scheme and Clojure. The calculator language supports only four functions: `+`, `/`, `-`, and `*`. But it's Scheme-like in its notation (`(+ 6 7)`) and does composition of functions as Scheme does.

Original Scheme code:
(Scheme listing not preserved)
And my translation into Clojure:
(Clojure listing not preserved)
The key to handling deep lists can be seen in `(map calc-eval (cdr exp))`. It is sort of a recursive call, but not exactly a recursive call, because there's no open parenthesis in front of `calc-eval`. Instead, `calc-eval` is an argument to `map`; `map` will typically call `calc-eval` more than once (once for each sub-expression). So it's not just a simple recursive call, but a multi-way recursive call, which is the secret of dealing with deep lists.

For deep lists, we make a recursive call for each element of the top-level list, and then for each element of sub-lists, and so on all the way down. The base case is an empty list (or when the expression isn't a list).
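To make that multi-way recursion concrete, here is a minimal JavaScript sketch of the same idea (the names and structure are my own, not the lecture's Scheme code): expressions are arrays like `['+', 6, 7]`, mirroring Scheme's `(+ 6 7)`.

```javascript
function calcEval(exp) {
  if (typeof exp === 'number') return exp;   // base case: not a list
  const [op, ...args] = exp;                 // (operator arg arg ...)
  const values = args.map(calcEval);         // the multi-way recursive call
  switch (op) {
    case '+': return values.reduce((a, b) => a + b, 0);
    case '*': return values.reduce((a, b) => a * b, 1);
    case '-': return values.reduce((a, b) => a - b);
    case '/': return values.reduce((a, b) => a / b);
    default: throw new Error('unknown operator: ' + op);
  }
}

calcEval(['+', 6, 7]);          // → 13
calcEval(['+', 1, ['*', 2, 3]]); // → 7: map recurses into the sub-list
```

Because `map` applies `calcEval` to every sub-expression, arbitrarily nested lists are handled without any extra machinery.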
There are three pieces to an interpreter (and this goes for any interpreter, not just Scheme or Clojure):

- `read` turns things in parentheses into pairs.
- `calc-eval` takes an expression as its argument and returns (and prints) the value of that expression; `(eval exp)` returns the value of the expression.
- `(apply function arglist)` returns the value returned by the function. This is where the example is different from a full interpreter: the actual Scheme interpreter handles first-class procedures, whereas this calculator example depends on the name of the function (it must be one of `+`, `/`, `-`, and `*`).

In Scheme, there are basically four types of expressions.
In an interpreter, an evaluator’s job is to take the stuff that is typed in and figure out what it means. This requires figuring out what the notation means. Scheme and Clojure (or any Lisp) make this easier, because a complete expression is one list; the language was designed in order to be able to evaluate its own programs. Compare this to Java, for example, where there are many different notations that are not uniform, so what you can put in one context is different than another context.
Lispians say “at the heart of every programming language there’s a lisp interpreter trying to get out”, because you have to evaluate expressions that are procedure calls. Syntax doesn’t get in the way.
A few differences between a full interpreter and this example can be seen in the following:
- `calc-eval`: we will not see a recursive call for `(car exp)` in `calc-eval`; `(car exp)` has to be one of the 4 math symbols, but in a real Scheme interpreter it could be a symbol that's the name of a procedure, or it could be a lambda or procedure call or any number of other things that provide the function (so it would need to be evaluated).
- `calc-apply`: this function takes `fn` and `args` as its arguments, where `args` is always a list. This is not just a simplification, but an actual difference from a full interpreter: we don't have procedures as first-class values, so the `fn` argument is a symbol, not the procedure itself. This means that the calculator cannot handle all procedures (like `sqrt`, etc.).

There are a number of properties of a programming language that determine what it is to be a program in that language. For example, Scheme has first-class procedures, applicative order, and variables. All of these properties manifest themselves in the interpreter; we can look at the interpreter and ask "how would I change this interpreter if I wanted Scheme, but with normal order instead of applicative order?" In that case, don't call `(map calc-eval (cdr exp))`, but just use `(cdr exp)`. Then we'd be giving `apply` actual argument expressions rather than argument values.
Syntax is the technical term for the form of a program, what the program looks like. The Scheme function syntax is `(procedure arg arg arg)`. Semantics is what that thing means. For example, `(procedure arg arg arg)` means "call that procedure with these argument values after you've recursively evaluated the argument expressions". There are differences across languages, but we see more or less the same kinds of things in the semantics—conditionals, loops, call functions, define variables—while syntax can be very different across languages.
An important point about the calculator: we are actually dealing with two different programming languages. The calculator is a program written in Scheme, but the language that the calculator implements is a programming language that isn’t Scheme. For example, there are no variables in this calculator programming language. When Scheme interpreters are written in Scheme, there are also two languages involved (and it’s more difficult to see the differences than Scheme vs. calculator language).
Notice that `eval` lives in both the syntax and semantics worlds. When it takes an expression (syntax) and returns a value, it turns syntax into semantics by turning the form into something meaningful. Meanwhile `apply` doesn't know anything about syntax. It takes a procedure and argument values, so it's entirely about semantics.

The `read` and `print` functions are primitive procedures that are not functional. `read` is not functional because, every time you call it with the same arguments, you get a different answer. `print` is not functional because it changes the state of the world. Functions just compute and return values. Even though Scheme is a functional programming language, the Scheme interpreter itself is not an entirely functional program. Most of it (`eval` and `apply`) is functional, since it just takes arguments and returns values.
Clearly Scheme and Clojure are both Lisps. It's nice to see that the Clojure implementation is as concise as the Scheme one (it's four more lines, but that could be eliminated if we didn't put the four `cond` clauses on their own lines). There are a few syntax differences (Clojure `reduce` instead of Scheme `accumulate`, `read-line` instead of `read`), but the semantics are identical.
One final syntax difference: in Scheme (and Common Lisp and most other Lisp dialects), `cons` is a primitive data structure made up of a pair. In Clojure, this is not the case. We can see the `cons` in the Scheme example when we use `car` (to access the first element) and `cdr` (to access the rest). Clojure does have a `cons` function, but it works differently:
(code sample not preserved)
This is really a subject for further reading, but suffice it to say that Clojure is a Lisp with some differences from other Lisps, including the fact that "`cons`, `first` and `rest` manipulate sequence abstractions, not concrete cons cells".
I'm not even joking! Go to this URL http://clojure.github.io/clojure/clojure.core-api.html and start reading from top to bottom. If you did not read through that page, you may not know about `amap`. If you stopped reading before you got to 'f', you wouldn't know about `frequencies`. However, if you read all the way through, you will be rewarded with knowledge about `vary-meta`.
In the process of doing this, I was struck by all the `bit-` functions: of the ~600 vars and functions in `clojure.core`, there are 12 specifically for bitwise operations.
A quick overview on binary:
(code sample not preserved)
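Clojure aside, the same binary/decimal conversions can be sketched in JavaScript (an illustration of the idea, analogous to Clojure's `2r` literals and Java's `Integer/toBinaryString`; the helper names are my own):

```javascript
// Parse a binary string into a decimal number, and back again.
const fromBinary = (s) => parseInt(s, 2);  // '1010' → 10
const toBinary = (n) => n.toString(2);     // 10 → '1010'

fromBinary('0010'); // → 2 (leading zeros don't change the value)
toBinary(15);       // → '1111'
```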
Some things to note:

- `0010` and binary `10` both represent decimal `2`.
- `1111` is the largest 4-bit integer, so 4 bits can represent values up to 15. A 32-bit integer maxes out at `11111111111111111111111111111111`, a decimal value of 4,294,967,295. We'll see that we don't always use every position counting up from 0 (the left-most position will be used to determine positive/negative).
- We can use `2r....` and `Integer/toBinaryString` to convert between binary and decimal. Other functions and formatters are discussed here.

This brings us to the bitwise operators. There are 12 in `clojure.core` (all visible in the source):
- `bit-and` – Bitwise and
- `bit-or` – Bitwise or
- `bit-xor` – Bitwise exclusive or
- `bit-not` – Bitwise complement
- `bit-and-not` – Bitwise and with complement
- `bit-clear` – Clear bit at index `n`
- `bit-flip` – Flip bit at index `n`
- `bit-set` – Set bit at index `n`
- `bit-test` – Test bit at index `n`
- `bit-shift-left` – Bitwise shift left
- `bit-shift-right` – Bitwise shift right
- `unsigned-bit-shift-right` – Bitwise shift right, without sign-extension
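As a cross-check, the same operations can be sketched in JavaScript (an illustration, not from the original post; note that JS bitwise operators work on 32-bit integers, whereas Clojure's work on 64-bit longs):

```javascript
const a = 0b1100; // 12
const b = 0b1010; // 10

a & b;   // bit-and     → 0b1000 (8)
a | b;   // bit-or      → 0b1110 (14)
a ^ b;   // bit-xor     → 0b0110 (6)
~a;      // bit-not     → -13 (two's complement)
a & ~b;  // bit-and-not → 0b0100 (4)

// bit-clear / bit-flip / bit-set / bit-test at index n,
// built from the primitives above (helper names are mine):
const bitClear = (x, n) => x & ~(1 << n); // bitClear(12, 2) → 8
const bitFlip = (x, n) => x ^ (1 << n);   // bitFlip(12, 0)  → 13
const bitSet = (x, n) => x | (1 << n);    // bitSet(8, 1)    → 10
const bitTest = (x, n) => (x & (1 << n)) !== 0; // bitTest(12, 2) → true

// shifts:
1 << 3;   // bit-shift-left           → 8
12 >> 2;  // bit-shift-right          → 3
-2 >>> 1; // unsigned shift (32-bit)  → 2147483647
```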
Negative numbers can be represented using signed integers, where the left-most position represents whether the integer is positive or negative. In one implementation (sign-magnitude notation), the left-most bit is the sign, and the remaining bits are the value.
This means that in an `n`-bit integer (e.g., 4-bit), the left-most bit signifies positive/negative, so the remaining `n-1` (here, 3) bits can hold the actual value. There are 16 possible values for a 4-bit integer (`0000` to `1111`), meaning that a signed 4-bit integer can go from -7 to +7 (it is still 16 possible values because we can represent both `-0` and `+0`, as binary `1000` and `0000`).
Two's complement is a slightly more complicated scheme for signed integers (though not overly so), and the one more commonly used. A leading `1` still signifies a negative integer. The difference is that it's not simply sign (first bit) and magnitude (remaining bits), because the magnitude is determined using the complement (hence the name).

Up until now, all the code samples have used unsigned integers, which is why all the values are positive (and why the examples with a leading 1 haven't been negative). Assuming 8-bit signed integers, we interpret things as follows:
(code sample not preserved)
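That interpretation can be sketched in JavaScript (an illustration with a hypothetical helper name): take the unsigned value of the 8 bits, and if the leading bit is set, subtract 256.

```javascript
// Interpret an 8-bit pattern as a two's-complement signed value.
function toSigned8(bits) {
  const unsigned = parseInt(bits, 2) & 0xff; // unsigned value, 0–255
  return unsigned > 127 ? unsigned - 256 : unsigned; // leading 1 → negative
}

toSigned8('00000010'); // →  2
toSigned8('11111110'); // → -2
toSigned8('11111111'); // → -1
```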
I alluded to this in the previous section: all the code samples have been unsigned integers. We've been exploring conversions between decimal and binary using Clojure's radix-based entry format (e.g., `2r1010`), which uses Java Longs, and Java's `Integer/toBinaryString`, which returns strings.
This means there appear to be inconsistencies when dealing with signed integers:
(code sample not preserved)
I say there appear to be inconsistencies, because this is the expected result of using Long. A full discussion gets into the intricacies of language design and primitives, as exemplified by threads like this and this.
A basic implementation is easy enough:
(code sample not preserved)
`R.sortBy` sorts according to a given function, in this case `R.prop` (where `'silver'` could be substituted for any other property).

To ensure the order (ascending vs. descending), we can introduce `R.comparator`:
(code sample not preserved)
How can we handle tiebreakers? That is, as in the example above, what if two elements in the array have identical `gold` values and we attempt to sort by `gold`: which should be sorted first? We can ensure a deterministic result with predictable tiebreaks using comparators and `R.either`.
Finally, what if we need more than one tiebreaker? How do we handle objects that have identical `gold` AND `silver` values? `R.either` expects two arguments, so the solution is to create a variadic implementation of `R.either`, one that will accept an unknown number of arguments, so we can pass tiebreaker comparators for all possible situations:
(code sample not preserved)
The crux of this solution is `variadicEither`, a variadic re-implementation of `R.either` that can accept a variable number of arguments. It uses `head` (the first argument) and `...tail` (all remaining arguments) to reduce over all arguments and return a function that addresses all tiebreak possibilities. `R.sort` expects a comparator function, which `R.either` and `variadicEither` both return.
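Setting Ramda aside, the underlying idea can be sketched in plain JavaScript: chain comparators and fall through to the next one on a tie (the helper name and sample data here are invented):

```javascript
// Combine comparators: the first one that breaks the tie wins.
function chainComparators(...comparators) {
  return function (a, b) {
    for (const cmp of comparators) {
      const result = cmp(a, b);
      if (result !== 0) return result; // first non-tie decides
    }
    return 0; // complete tie
  };
}

// Sort descending by gold, tiebreak on silver.
const byGold = (a, b) => b.gold - a.gold;
const bySilver = (a, b) => b.silver - a.silver;

const medals = [
  { country: 'A', gold: 2, silver: 1 },
  { country: 'B', gold: 2, silver: 3 },
  { country: 'C', gold: 5, silver: 0 },
];

const sorted = [...medals].sort(chainComparators(byGold, bySilver));
// → C (5 gold), then B over A (tie on gold broken by silver)
```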
Of course this solution still has a bit of boilerplate (the repetition of `R.comparator(...)`). For a reusable `sortByProps` implementation that takes an array of props and a list, see this Ramda.js recipe that I recently added.
While deploying, I ran into some issues with my knexfile. That is, I was able to create the database using the Heroku CLI, but running the migrations and configuring the database connection took a bit of finessing.
Long story short, two parts:
Find your database URL. Using the Heroku CLI, running `heroku config` (or `heroku config --app [app name]`) will return something like the following:
(code sample not preserved)
Copy and paste the `DATABASE_URL` as your knexfile's `connection` value.
Heroku requires SSL for PostgreSQL connections. There are two options (and potentially a third in the future):

- Add an `ssl: true` key/value pair to the knexfile's connection config
- Run `heroku config:set PGSSLMODE=require`
- Add a `'?ssl=true'` query parameter to your database URL (the knexfile's `connection`)

With those two things in mind, a knexfile like the following will work just fine:
(code sample not preserved)
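A hedged sketch of such a knexfile (the migrations path and overall shape are assumptions, not the exact file from this deployment; `DATABASE_URL` comes from `heroku config` as above):

```javascript
// knexfile.js — sketch of a Heroku-friendly production config
module.exports = {
  production: {
    client: 'pg',
    // The '?ssl=true' query parameter satisfies Heroku's SSL requirement
    connection: process.env.DATABASE_URL + '?ssl=true',
    migrations: {
      directory: './migrations',
    },
  },
};
```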
`react-faux-dom` (on Github) over my previous post's suggestion.
TL;DR, hear it straight from the lib author: Oliver Caldwell wrote this blog post about `react-faux-dom`, which enables a cleanly organized and powerful combination of React and D3.
That post in four bullet points: `react-faux-dom` makes a fake DOM to support D3. It might seem silly, but it enables us to support D3 while remaining within React.

(Note: regarding the second bullet, this post from the React docs is worth a reread.)
Using a fake DOM means we can drop D3 scripts into a React component's `render()` function and it'll just work. It was trivial to prove out in a production PR:
(code sample not preserved)
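The pattern looks roughly like the following. This is an illustrative sketch, not the production PR: the component shape, prop names, and D3 v4-style scale API are all assumptions here. D3 renders into a faux `<svg>` element, which is then converted back into React elements.

```javascript
import React from 'react';
import * as d3 from 'd3';
import ReactFauxDOM from 'react-faux-dom';

// Hypothetical sparkline component: D3 draws into a fake DOM node.
function Sparkline({ width, height, max, data }) {
  const el = ReactFauxDOM.createElement('svg'); // fake DOM node

  const x = d3.scaleLinear().domain([0, data.length - 1]).range([0, width]);
  const y = d3.scaleLinear().domain([0, max]).range([height, 0]);
  const line = d3.line().x((d, i) => x(i)).y((d) => y(d));

  // Plain D3 API calls, exactly as they'd appear against a real DOM
  d3.select(el)
    .attr('width', width)
    .attr('height', height)
    .append('path')
    .attr('d', line(data))
    .attr('fill', 'none')
    .attr('stroke', 'steelblue');

  return el.toReact(); // back to React land
}

export default Sparkline;
```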
Rendering a sparkline is as simple as `<Sparkline width={500} height={500} max={10} data={[1, 3, 2, 5, 4]} interpolation={"basis"} />`. We get the benefits of React semantics AND the D3 API, both neatly organized in their respective places.
I consider it a clear win to maintain React component organization while leveraging the power of all that D3 offers, but I suppose what it comes down to is this:
For me, it's about worrying about the right "lines" to draw in your app, then fill the few shapes those lines create with garbage and ship.
— Ryan Florence (@ryanflorence) February 24, 2016
So many code design decisions boil down to the border between things. The interface. The “line” between where React component code belongs and where D3 code belongs. Ultimately, this still leaves us to fill in the lines with whatever we choose to write, but this library’s placement of the “line” is an improvement over anything else I’ve seen.
As the author writes, “All [React and D3] concepts remain the same, react-faux-dom is just the glue in the middle.” This clean separation is hugely helpful in writing dataviz React components with D3.
At my day job, we'd had a `colors.css` file for a while, where we defined all the hex codes for our color scheme, as specified by our designers. It looked something like this:
(code sample not preserved)
This enabled us to use the same colors in any of our other CSS files using CSS modules:
(code sample not preserved)
Straightforward, keeps things DRY, makes it easy to change colors when it strikes the designer’s fancy, etc.
For a long time, while our CSS colors were nicely organized, our JS colors weren’t. We have colors in our D3 visualizations and our inline styles on React components. As a simple improvement, I decided to pull all our colors into a single map that could be read by both our CSS and JS files. NB: this post is about CSS colors, but can apply to any CSS variables you’d like shared to JS.
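For illustration, that single map might look like this minimal sketch (the color names and hex values here are invented, not our actual palette): plain JavaScript exports that both the JS code and the PostCSS build can read.

```javascript
// colors.js — one source of truth for the color scheme (sample values)
const colors = {
  brandBlue: '#2177b0',
  warningRed: '#d9534f',
  neutralGray: '#666666',
};

// JS consumers (React inline styles, D3) import this directly;
// the PostCSS build exposes the same values as CSS variables.
module.exports = colors;
```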
PostCSS does a lot of things. I’ll leave it as an exercise to the reader to explore the various plugins (or just some of the most popular ones).
For my purposes, I needed postcss-loader (a loader for Webpack), postcss-cssnext (enables the latest CSS syntax, which we were already using through cssnext), postcss-url, and postcss-import.
The change was effectively X steps (X easy steps to a single list of color variables!):
(configuration code samples not preserved)
And that's it! Now, in addition to everything it already did, running `webpack-dev-server` will (1) compile using PostCSS, (2) read from colors.js, and (3) set all colors in colors.js as global CSS variables.
The one limitation is hot-reloading. That is, hot reloading works perfectly on changes to JavaScript files and CSS files, with one exception: colors.js. Since colors.js is read on build, we need to restart the webpack dev server anytime we change or add a color variable. This question poses effectively the same issue (“…every time I change a variable I have to restart the webpack dev server”). For now, that’s a tradeoff I can live with.
This new pattern enables much more inline styling with JavaScript. That is, now our React components and D3 visualizations can, in theory, read style variables from JavaScript and never know about CSS.
Following this to its extreme of no-CSS/all-JS may seem crazy, but I remain curious. A lot has been said about how inline styles with JavaScript may be the future. At a minimum, it’s convenient and fun to do more JS and less CSS. I’m excited to see how the community experiments with inline styling and if there come to be best practices around separation of concerns.
First of all, a special thanks to the organizers and speakers. This was a very well run conference with some high-class talks. From breakfast Monday through to the closing reception on Tuesday, with the single exception of jackhammer noises during some of the talks (what are you gonna do about construction next door?), everything was very well done.
Moving on: I learned a ton, got to know some awesome members of this community, and met some incredible people who've influenced my career (by giving talks, authoring open-source, or otherwise helping me write better code). Here are a couple of my main takeaways, in no particular order.
I came into the conference with React.js and Redux experience, but little to no knowledge of GraphQL, Relay, or React Native. I was not disappointed then, that the majority (maybe two-thirds?) of the talks addressed exactly those things. For a long time, React Native has been on my list of new tech to explore, as someone who’s never written anything for mobile. GraphQL and Relay, meanwhile, could be directly applicable to my everyday work. And I’m of course always pleased to learn new things about what I already “know”, like aspects of React performance that I haven’t thought about.
If the majority of the talks addressed React.js, React Native, GraphQL, and Relay, the remainder focused on areas of tech that I rarely if ever consider. Talks covered subjects like virtual reality, hardware, and graphics. I may never focus the majority of my time on any of these, but it’s eye-opening and motivating to see people pushing the limits of what can be done.
A few of my favorite talks: the Draft.js talk walked through building a rich text editor, including handling `@` mentions, so I totally resonated with the speaker's walkthrough of the problem and implementation, and I'm excited to give Draft.js a try.

Any post-conference list would be incomplete without mentioning the things I'm excited to explore and implement.
As a mental exercise, a friend proposed the following potential interview question: given a directory with 10,000 files of text, how would you extract all the phone numbers from that directory into a single file?
My immediate thought: this would be a basic assessment of someone's knowledge of regular expressions and filesystem operations.
I even knew how I’d implement it: use node’s filesystem module to read the files, parse them for regex matches, and write all matches to a new file.
I was intrigued enough that I decided to prove it out. I wrote a basic phone number regex by hand, `\d{3}(-|\s|\.)?\d{3}(-|\s|\.)?\d{4}` (for 3 digits, 3 digits, and 4 digits separated by hyphens, periods, spaces, or nothing), and looked into popular phone number regexes. I realized how unfamiliar I am with Node's filesystem module (`readdir`, `readFile`, and `writeFile`). Then I got curious about publishing npm packages. Before I knew it, I'd spent a couple hours and produced a somewhat polished npm project for this hypothetical task.
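For reference, the hand-written pattern behaves like this as a JavaScript regex (a sketch; the word boundaries are my addition for testing):

```javascript
// 3 digits, 3 digits, 4 digits; separators optional (hyphen, space, or dot)
const phone = /\b\d{3}(-|\s|\.)?\d{3}(-|\s|\.)?\d{4}\b/;

phone.test('555-867-5309');  // → true
phone.test('555.867.5309');  // → true
phone.test('5558675309');    // → true
phone.test('call me maybe'); // → false
```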
And it was all wrong.
The thought process I used was logical. My work as a software engineer focuses almost entirely on the web, JavaScript, build tools, UI features, HTTP servers. I’m comfortable with databases, front- and back-end code, version control, and countless other things. But that’s a small subset of software! Classic hammer/nail.
To Wikipedia:
Software: any set of instructions that directs a computer to perform specific tasks or operations.
Software is about problem-solving. But problems can’t be solved well without being understood. And they won’t be well understood if we assume we should use the same solution every time. There’s something to be said for using the tools you know, but software also requires a humility to recognize when a given tool is the wrong one.
In this case, I skipped the step of analyzing the problem. I didn’t think about the specifics of the problem, the tradeoffs of time, or the alternative solutions I could choose. This was a one-time, approximate task. It was unlikely to be repeated often enough to make automating worthwhile. And yet I instinctually went with what I knew, implementing a “good”, “complete” solution that was really just a picture of overengineering.
For a problem like this, why use another language or abstraction when it can be done via the command line, the text interface for the computer itself and a much more direct interface with the filesystem? Why use a language like JavaScript that’s best suited for the web or pull in Node just for the sake of using a tool I know?
These are questions I won't soon forget to ask myself when I take on a new problem. Hopefully that'll prevent me from falling into traps that webcomics are made of. I know for sure that, next time I'm presented with a problem of finding text within a filesystem, I'll remember that tools like `grep` were made for exactly that: a simpler, less time-intensive, and more appropriate solution.
`egrep "\b[[:digit:]]{3}(-|\s|.)?[[:digit:]]{3}(-|\s|.)?[[:digit:]]{4}\b" ./* > ./nums.txt`
What is JavaScript anyway? Some words: a single-threaded, non-blocking, asynchronous, concurrent language.
If you’re like me (or Philip Roberts, it seems), these words themselves don’t mean a ton. So let’s parse that out.
JavaScript runtimes (like V8) have a heap (memory allocation) and a call stack (execution contexts). But they don't have `setTimeout`, the DOM, etc. Those are web APIs provided by the browser.

JavaScript in the browser has:

- the call stack (in the runtime)
- web APIs like the DOM and `setTimeout`
- callbacks like `onClick`, `onLoad`, `onDone`
- the task queue and the event loop
JavaScript is single-threaded, meaning it has a single call stack, meaning it can do one thing at a time. The call stack is basically a data structure which records where in the program we are. If we step into a function, we push something onto the stack. If we return from a function, we pop off the top of the stack.
When our program throws an error, we see the call stack in the console. We see the state of the stack (which functions have been called) when that error happened.
An important question that this relates to: what happens when things are slow? In other words, blocking. Blocking doesn't have a strict definition; really it's just things that are slow. `console.log` isn't slow, but `while` loops from 1 to 1,000,000,000, image processing, and network requests are slow. Those things that are slow and on the stack are blocking.
Since JS is single-threaded, we make a network request and have to wait until it’s done. This is a problem in the browser—while we wait on a request, the browser is blocked (can’t click things, submit forms, etc.). The solution is asynchronous callbacks.
It's a lie that JavaScript can only do one thing at a time. Well, it's true that JavaScript the runtime can only do one thing at a time: it can't make an ajax request while running other code, and it can't run a `setTimeout` callback while running other code. But we can do things concurrently, because the browser is more than the runtime (remember the grainy image above).
The stack can put things into web APIs, which (when done) push callbacks onto task queue, and then…the event loop. Finally we get to the event loop. It’s the simplest little piece in this equation, and it has one very simple job. Look at the stack and look at the task queue; if the stack is empty, it takes the first thing off of the queue and pushes it onto the stack (back in JS land, back inside V8).
Philip built an awesome tool to visualize all of this, called Loupe. It’s a tool that can visualize the JavaScript runtime at runtime.
Let's use it to look at a simple example: logging a few things to the console, with one `console.log` happening asynchronously in a `setTimeout`.
What's actually happening here? Let's go through it:

1. We call the `console.log('Hi');` function, so it's pushed onto the call stack.
2. `console.log('Hi');` returns, so it's popped off the top of the stack.
3. We call the `setTimeout` function, so it's pushed onto the call stack.
4. `setTimeout` is part of the web API, so the web API handles that and times out the 2 seconds.
5. When the timer completes, the callback goes onto the task queue; once the stack is empty, the event loop pushes it onto the stack, calling the `console.log('Everybody')` function and pushing it onto the stack.
6. `console.log('Everybody')` returns, so it's popped off the call stack.
. setTimeout
with 0 isn’t necessarily intuitive, except when considered in the context of call stack and event loop. It basically defers something until the stack is clear.
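A small sketch illustrates the deferral (my own example, not from the talk): even with a delay of 0, the callback waits on the task queue until the stack is empty.

```javascript
const order = [];

order.push('first');
setTimeout(() => order.push('third'), 0); // queued, not run immediately
order.push('second');

// Synchronously, order is ['first', 'second']; 'third' only arrives
// once the event loop finds an empty stack.
```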
To tie this back to something more surface level, something we deal with every day, let's consider rendering. The browser is constrained by what we're doing in JavaScript. It would like to repaint the screen every 16.6ms (or 60 frames/second). But it can't actually do a render if there's code on the stack.
As Philip says,
When people say “don’t block the event loop”, this is exactly what they’re talking about. Don’t put slow code on the stack because, when you do that, the browser can’t do what it needs to do, like create a nice fluid UI.
So, for example, scroll handlers trigger a lot and can cause a lagging UI. Incidentally, this is the clearest explanation I’ve heard of debouncing, which is exactly what you need to do to prevent blocking the event loop (that is, let’s only do those slow events every X times the scroll handler triggers).
In summary, that’s what the heck the event loop is. Philip’s talk helped me understand a lot of what JavaScript is, what it isn’t, which parts of it are runtime vs. browser, and how to use it effectively. Give the talk a watch!
These moments are some of my favorites, opportunities to focus on the important/non-urgent tasks like performance improvements, refactors, new technologies, and code cleanup. So I spent the last few hours of the day on that last item: pruning old code from the codebase.
We all know that pruning is about removing the superfluous, which is by definition a good thing (“superfluous” being “unnecessary”, after all), but the benefits of pruning also include:
Source: “Pruning”, Wikipedia
Ok, ok, we’re not talking about plant health, risk of falling branches, or yield of flowers and fruits. But it’s a pretty straightforward metaphor for code.
The pull request I ended up submitting did three things (removed, removed, removed, like pruning):

- Removed the `/** @jsx React.DOM */` pragma, which has been unnecessary since React 0.12 (we're currently running 0.13).
- Removed `Immutable` as a global variable (we're using Immutable.js on most, but not all, pages, and we want to explicitly require libraries for each file/component, plus…yeah).
These are clearly beneficial things. Deleting the unnecessary reduces mental overhead. Explicit requires ease our ability to reason about a piece of code. Removing an external library improves performance.
Simple, beneficial, and yet when do these things get accomplished? As I referenced above, rarely. It was only after feature development, testing, and bugfixing that I even considered it. To some degree that’s on me: it’s a technical discipline like performance or code quality that needs to be considered at each step along the way. But it’s also on the development process and management: if it’s not prioritized and time isn’t allotted, it won’t happen! That simple.
Anyway, that’s my argument for code pruning. Regardless of whether anyone else finds it valid, it’s a personal goal of mine to spend a couple hours a week on exactly that. Removing dependencies, deleting dead code, refactoring. And who knows, perhaps with a little more disciplined code pruning along the way, code quality will improve and our team will have fewer weeks of urgent bugfixes.
Unfortunately, no definitive answers. Or perhaps too many “definitive” answers. So here’s where I stand.
Let me back up for a second. I went through a similar exercise a little while back. Google even pointed me to this handy article, which I found so useful that I summarized it in a StackOverflow answer.
I was pleasantly surprised by the three upvotes that came over the next few months, but of course all pleasant things must come to an end:
As with frighteningly many answers to this question, your example of ‘declarative’ programming is an example of functional programming. The semantics of ‘map’ are ‘apply this function to the elements of the array in order’. You’re not allowing the runtime any leeway in the order of execution.
Silly me, thinking I’d grasped a complex but fundamental concept, and sillier me, thinking I should share it. On the internet.
Thus began my quest to answer those three questions. (1) What is the difference between imperative and declarative? (2) Can I safely call functional programming a subset of declarative programming? (3) If so, what’s an example of declarative programming that isn’t functional?
I first tried to make sense of the critique I’d received on StackOverflow: “…your example of ‘declarative’ programming is an example of functional programming.” I suppose that’s what rendered my answer inappropriate for the question “What is the difference between declarative and imperative programming?”. Easy solution—just look back to the accepted answer…and find that the same commenter left a very similar critique.
This seemed to imply that functional programming and declarative programming are not synonymous (which I already believed), but also that there’s a significant difference between declarative programming and functional programming (which I hadn’t recognized). So what is it? Going back even farther, if this is a valid critique, what’s the difference between imperative and declarative to begin with? I decided to reassess my answer.
Imperative programming: explicitly specifying the steps (the “how”) the machine should take to reach a result.

Declarative programming: specifying the desired result (the “what”), minimizing or eliminating side effects.

Takeaways:
Here’s where we already see some divergence in definition. Minimizing vs eliminating opens us up to an ambiguity. Does declarative programming necessitate absolutely no side effects, or just a preference for immutability? Is imperative programming about the lowest level of specified instructions, or just less abstraction?
With these thoughts in mind, I’d argue that declarative and imperative are relative terms, not absolute black and white, but a spectrum of lighter and darker gray. Generalizing, declarative programming is about specifying the “what”, by way of certain abstractions (for example, functional operations like `map`) or by removing abstractions (for example, plain HTML rather than jQuery DOM manipulation), whereas imperative programming is about the “how”, using lower-level steps (for example, `for` loops).
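To make the spectrum concrete, here’s a small sketch (example mine) contrasting the two styles:

```javascript
var nums = [1, 2, 3];

// Imperative: spell out *how* (manage the index, mutate an accumulator)
var doubledImperative = [];
for (var i = 0; i < nums.length; i++) {
  doubledImperative.push(nums[i] * 2);
}

// Declarative: state *what* (a mapping from each element to its double)
var doubledDeclarative = nums.map(function (n) { return n * 2; });

// Both produce [2, 4, 6]
```

Same result either way; the difference is how much of the mechanics the code makes you read.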
So I stand by my original sentiments, and my original answer on StackOverflow. If we think of imperative and declarative as a spectrum rather than a binary either-or, we can use it as an effective part of our vocabulary rather than a confusing semantic argument.
That’s where I’m at anyway. I hope you don’t find it “frighteningly” off-target.
What does this add up to? I’ve been thinking a lot about performance, particularly front end performance of web apps. In this post, I’ll summarize some takeaways from AEA regarding performance, particularly as they relate to integrating performance into planning and design.
Summarizing a few points from Yesenia’s talk, consider this project plan, one that we’ve all presumed to follow:
But then realize that, due to biz dev requirements, delays in research, and longer-than-expected design time, all while keeping the same sprint/client/deployment deadline, the actual project plan looks like this:
We’ve all seen it! And it probably happens more often this way than the ideal case. What’s the definition of insanity again?
Now I could speculate and hypothesize about the underlying issues of failed project management, but I’ll leave that to someone else. I want to focus on performance. A question: with that development timeline in mind, when do we think about performance? During the software development segment, right?
But why? We developers think we can implement user functionality, meet the design specs, and still have time left over to optimize at the end. But that never happens. Could we not consider performance at the beginning of the process?
We need to think about performance as a design feature. We need to think about performance early. We need to prioritize page speed and load times just as much as UX and beautiful interfaces. But how?
I’m not sure we’re being cheap or smart by thinking about performance this way. We don’t have any budget at all! A theme throughout AEA, and not just in Yesenia’s and Lara’s talks, was setting a performance budget.
Yesenia explained that a performance budget is both “a performance goal used to guide design & development” and “a tangible way to talk about performance.” How well do those conversations about performance-intensive features usually go? For developers, it’s often either “no, we can’t do that” or a resigned “ok, I guess we have to do it.”
What if, instead of a win-lose scenario between designers and devs, the conversation was framed around a budget? We talk about the inherent tradeoffs between technologies, so why not consider performance tradeoffs in discussions about design?
“But how do I go about setting a performance budget?”, you might ask. Yesenia and Lara made some suggestions:
I’m still working on this, on how to practically establish a performance budget on the job. I don’t have access to my competitors’ product, so for now I can only set the goal as arbitrarily better performance.
One thing I will say: everything I’ve said in this blogpost has to be a culture change. Stealing from Lara’s talk, it can’t be all on one individual. You have to establish a culture of performance.
"There should be no performance cops or janitors. It's not sustainable. You need a culture of performance." @lara_hogan #aeaaus
— Chris EK (@cek_io) October 6, 2015
Before I continue, some thanks are in order. AEA made three scholarships available to Flatiron School alumni. Now, in retrospect, I realize the value of attending is well worth the $1,000+ price tag, but the reality is: I wouldn’t have been able to justify that cost on my own. So some major thank yous for making this happen, first to AEA for offering those scholarships, and then to Flatiron School for enabling that hookup. Thank you!
Moving on, I want to share some of my notes and highlights from the three days. For a quick summary of the conference in the form of 140-character highlights, #aeaaus is a great place to start. A summary of my takeaways from the conference (as well as my full notes) are below.
My full unfiltered/unedited notes can be viewed here, and videos of all talks will be posted online at some point, so you should sign up here. For now, these are my highlights/summaries of each talk, some of which may turn into full posts.
`once` function:

Great, `once` “accepts a function fn and returns a function that guards invocation of fn such that fn can only ever be called once, no matter how many times the returned function is invoked.” Getting back to the original question (i.e., in an interview), this would be one kind of answer. It would show knowledge of the JavaScript ecosystem, some of its libraries (and why to use them), and how to apply it to a specific problem.
That said, let’s go deeper—how would we implement `once` from scratch? Since Ramda’s implementation worked so well for us, let’s look no further than Ramda. Looking at the source, it’s relatively straightforward to see what’s going on:
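Roughly, Ramda’s `once` reads like the following (a simplified sketch consistent with the walkthrough below; Ramda’s real `_curry1` helper is stubbed here as a pass-through):

```javascript
// Stand-in for Ramda's internal _curry1 helper (just a pass-through here)
var _curry1 = function (fn) { return fn; };

var once = _curry1(function once(fn) {
  var called = false;
  var result;
  return function () {
    if (called) {
      return result;                      // later calls return the cached value
    }
    called = true;
    result = fn.apply(this, arguments);   // first call actually invokes fn
    return result;
  };
});
```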
Let’s ignore `_curry1` for now (though we’ll get to it), and rewrite as follows:
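With `_curry1` stripped away, a plain-JavaScript version (a sketch, names mine) might read:

```javascript
function once(fn) {
  var called = false;
  var result;
  return function () {
    if (called) {
      return result;                      // every later call returns the cached value
    }
    called = true;
    result = fn.apply(this, arguments);
    return result;
  };
}
```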
And there we have it! So what’s actually happening here? The first several lines are simple: we declare the variables `called` (initialized to `false`) and `result`, return `result` if `called` is true, and otherwise set `called` to true and then assign `result = fn.apply(this, arguments);`.
What is that line doing? It’s using `apply()`, which “calls a function with a given this value and arguments provided as an array”. It’s a way of dealing with scope, making sure we pass the right value of `this` to `fn`. In our example above (`console.log...`), this isn’t an issue, so we could plausibly replace the line in question with `result = fn(arguments);`.
It is an issue, however, when scope and `this` matter. For example, using Ramda’s example of wrapping an `addOne` function (`var addOneOnce = R.once(function(x){ return x + 1; });`) using `once`, we can see that not using `apply()` (left) breaks the adding behavior, but it works when using `apply()` (right).
This occurs (on the left) because `fn(arguments)` passes the whole arguments object as `x`, so `x` in the `addOne` function stringifies to `"[object Arguments]"`, which, when 1 is added, becomes `"[object Arguments]1"`. On the other hand (on the right), `fn.apply(this, arguments)` spreads the real arguments into the call, so `x` becomes 10 (or whatever argument we pass) and the result is correct.
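A quick sketch of that difference (the hypothetical `onceBroken` is identical to `once` except for how it invokes `fn`):

```javascript
function onceBroken(fn) {
  var called = false;
  var result;
  return function () {
    if (called) { return result; }
    called = true;
    result = fn(arguments);   // passes the whole arguments *object* as x
    return result;
  };
}

function onceCorrect(fn) {
  var called = false;
  var result;
  return function () {
    if (called) { return result; }
    called = true;
    result = fn.apply(this, arguments);   // spreads the real arguments into fn
    return result;
  };
}

var addOne = function (x) { return x + 1; };
onceBroken(addOne)(10);   // "[object Arguments]1"
onceCorrect(addOne)(10);  // 11
```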
And that about concludes this post, with one open question remaining: currying? Ramda’s implementation of `once` uses `_curry1`, in keeping with its API (functions first, data last) and functional style. Currying is just a way of turning a function that expects n parameters into one that, when supplied fewer than n parameters, returns a new function awaiting the remaining parameters. It’s a handy way that Ramda enables us to build functions, pass those functions around as first-class objects, and call them when ready. Back to our `once` examples, currying is what’s happening when we call `once(addOne)` and see `function anonymous()`. `once(addOne)` expects one more parameter, so we call `once(addOne)(10)` and get 11.
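To illustrate the idea of currying itself (this is example code, not Ramda’s actual implementation), here’s a hypothetical two-argument curry:

```javascript
function curry2(fn) {
  return function (a, b) {
    if (arguments.length >= 2) {
      return fn(a, b);            // all parameters supplied: call through
    }
    return function (b2) {        // otherwise, return a function awaiting the rest
      return fn(a, b2);
    };
  };
}

var add = curry2(function (a, b) { return a + b; });
add(1, 2);  // 3
add(1)(2);  // 3
```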
Imagine the entirety of your organization’s chatroom communications. Imagine making sense of those communications in a single interactive visualization, one that factors in date and time, chatroom name, individual participants’ names, and message content.
I recently implemented just such a feature. While something like this of course requires back end analytics, aggregations of data, and “data science” that can handle such “big data,” it also relates to user interface (UI), the subject of this blog post.
Until recently, this app’s client-side UI was built entirely in Ember.js, a framework intended for “ambitious” applications (and thus a good fit!). Over time, however, the UI team came to realize some of Ember’s limitations, some of those conventions and patterns inherent to the framework that—rather than making developers’ lives easier, as is any framework’s aim—posed challenges to the organization and maintenance of our codebase.
Enter React.js, a UI library that solely addresses issues in the view layer. Over the last 5-6 months, we have been porting Ember code over to React, started using React for all greenfield components, and made React the standard for our UI. This blog post won’t cover the litany of (fiercely debated) pros and cons of Ember vs. React, but suffice it to say that React has made us on the product development team unanimously happier.
All of that is just background to the feature I initially described, because a data visualization isn’t implemented solely in Ember or React. Or is it?
The short answer is no. To effectively create data visualizations, we have leveraged D3.js, a JavaScript library for manipulating documents. D3 functions similarly to jQuery in that it emphasizes selectors and listeners; for example, to initialize a D3 svg, we might write `d3.select('body').append('svg') // ...` and, from there, append rectangles and lines, bind click and hover actions, etc. Not so different from a basic jQuery application (`$('button').on('click', function()...)`).
That said, what D3 ultimately produces is a series of DOM elements, specifically SVG elements. Some basic D3 code might look like:
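A browser-side sketch along those lines (this assumes `d3` is loaded on the page, and the data is hypothetical):

```javascript
var data = [120, 90, 60];

var svg = d3.select('body').append('svg')
    .attr('width', 400)
    .attr('height', 200);

// Bind one rect per datum, sized by its value
svg.selectAll('rect')
    .data(data)
  .enter().append('rect')
    .attr('x', function (d, i) { return i * 50; })
    .attr('y', function (d) { return 200 - d; })
    .attr('width', 40)
    .attr('height', function (d) { return d; });
```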
That code then maps to SVG elements in the DOM, looking something like this:
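That is, plain SVG markup along these lines (dimensions hypothetical):

```html
<svg width="400" height="200">
  <rect x="0" y="80" width="40" height="120"></rect>
  <rect x="50" y="110" width="40" height="90"></rect>
  <rect x="100" y="140" width="40" height="60"></rect>
</svg>
```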
There are multiple ways to wire D3 up to a given web framework, but it’s ultimately a script that runs to build the component in the DOM. Our old pattern was loosely the following:
In the component’s `didInsertElement` hook, run the D3 script that selects body and appends SVG.

Until recently, we had been able to maintain and reuse our Ember D3 components, but this chat timelines visualization required a brand new D3 component, one we decided to write in React.
My initial instinct, as with simpler React components, was to render the component with properties and run the D3 script in React’s `render` or `componentDidMount` hook. What became clear, however, was that we didn’t need to run the D3 script at all. In place of `d3.select(...).append(...)` we could simply build up svg elements in the `render` hook.
This approach, while going against my initial instinct of using D3’s pattern, aligns well with React’s strengths of one-way data flow and components that are easier to reason about than traditional data binding. It’s a declarative approach that expresses what it does, as opposed to an imperative approach that expresses how it’s done. And it has benefits of composability and extensibility—rather than selecting and appending as additional design specs come in, we can componentize everything—bars, axes, labels, plots—to reuse later or modify with greater control.
And that earlier question about data visualizations being written entirely in a framework? Considered this way, we can construct the SVG elements directly in React, something like this:
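In JSX, that construction might look roughly like this (component shape and prop names are assumptions for illustration):

```javascript
// A sketch of a render hook that builds the SVG directly (JSX)
render() {
  const { width, height, bars } = this.props;
  return (
    <svg width={width} height={height}>
      {bars.map(bar => (
        <rect key={bar.id} x={bar.x} y={bar.y} width={bar.width} height={bar.height} />
      ))}
    </svg>
  );
}
```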
You can pretty quickly see how the inner rectangles could be pulled out as components of their own, as could axes, labels, etc. We’ve found this pattern to be much easier to reason about when building visualizations in our UI. So here’s to rethinking UI patterns and, as a result, writing code that’s easier to reason through.
The “problem” already had a solution of sorts, in the form of a short jQuery script a coworker had pulled together:
A simple solution that, when run in the browser console, would log the offset. I just wanted to make it even simpler by removing that one extra step. That meant setting up a Chrome extension, which was a far easier process than I thought it might be.
Create a `manifest.json` file, with a `name`, `manifest_version` of 2, a `version`, and a `description`; then add an icon file and reference it in `manifest.json` under `browser_action.default_icon`. By now, `manifest.json` should look something like this:
```json
{
  "name": "Hello World!",
  "manifest_version": 2,
  "version": "1.0",
  "description": "My first Chrome extension.",
  "browser_action": {
    "default_icon": "icon.png"
  }
}
```
Google’s documentation is pretty clear and helpful, and Lifehacker has a decent intro tutorial.
My commitment to blog is also part of a larger goal: to further establish myself as a software engineer, which means (among other things like improving my technical skills) better understanding the industry. Recognizing my ignorance about computer science and its history, I put Walter Isaacson’s “The Innovators” on my Christmas wishlist.
I’m only two chapters in, but I’m already struck by two major things: (1) the main thesis arguing that innovation is more attributable to collaboration than to singular genius and (2) the convergence of technological advances in the year 1937, all of which accelerated towards the modern computer.
Isaacson writes:
New approaches, technologies, and theories began to emerge in 1937… It would become an annus mirabilis of the computer age, and the result would be the triumph of four properties, somewhat interrelated, that would define modern computing.
He defines those four computing properties as digital, binary, electronic, and general purpose, and he summarizes the following key individuals and their contributions in (and around) 1937:
| Person(s) | Contribution |
| --- | --- |
| Alan Turing (Cambridge/Princeton) | Published paper of mathematical theory (addressing Hilbert’s Entscheidungsproblem), with the byproduct of the conceptual “Logical Computing Machine”, which became known as the Turing Machine (“It is possible to invent a single machine which can be used to compute any computable sequence”). |
| Claude Shannon and George Stibitz (Bell Labs) | Shannon figured out that electrical circuits could execute Boolean logical operations using an arrangement of on-off switches. Stibitz built the Complex Number Calculator which, based on Shannon’s insight, showed the potential of circuit relays to do binary math, process information, and handle logical procedures. |
| Howard Aiken (Harvard) | Began plans on the Mark I which, when completed in 1944 under IBM and the Navy, was fully automatic—it could run for days without human intervention—as well as digital (though non-binary and partially mechanical). |
| Konrad Zuse (Berlin) | Finished a calculator prototype, the Z1, that was binary and could read instructions from a punched tape (though mechanical). This ultimately gave way to the Z3, which was the first fully working all-purpose, programmable digital computer. |
| John Vincent Atanasoff (Ames, Iowa) | Conceived the first partly electronic digital computer, which solved linear equations. It used mechanically rotating cylinders to replenish electrical charges in condensers and thus maintain memory. Atanasoff was also (disputedly) the inspiration for John Mauchly’s work. |
| John Mauchly and J. Presper Eckert (UPenn, 1940s) | With funding from the US War Department, built ENIAC (the Electronic Numerical Integrator and Computer). ENIAC was a digital computer using the decimal (not binary) system, which could handle conditional branching and subroutines. |
Incredible how much took place—or was at least initiated—over the course of a single year, and telling that the contributions that persisted were ones of collaboration, whether as partnerships of key individuals (e.g., Turing, Max Newman, and Alonzo Church) or in settings with collaborative resources (e.g., Bell Labs and major universities). Without diminishing their significance, those lone innovators were ultimately unable to mark history in the same way (e.g., Atanasoff, whose prototype was forgotten and dismantled, or Zuse, whose work with a single college friend was interrupted and lost when he was pulled into engineering airplanes for the German military).
As we head into this new year, I can only hope 2015 will be half as innovative as 1937, and that I can apply those lessons of collaboration to my blog and my continued growth as a developer.
How does the Gemfile work? A quick refresher:
We run `bundle install`, a Gemfile.lock is generated, and our dependencies are taken care of. Right? But what actually happens when we run `bundle install`?
Install the gems specified in your Gemfile. If this is the first time you run bundle install (and a Gemfile.lock does not exist), bundler will fetch all remote sources, resolve dependencies and install all needed gems.
If a Gemfile.lock does exist, and you have not updated your Gemfile, bundler will fetch all remote sources, but use the dependencies specified in the Gemfile.lock instead of resolving dependencies.
If a Gemfile.lock does exist, and you have updated your Gemfile, bundler will use the dependencies in the Gemfile.lock for all gems that you did not update, but will re-resolve the dependencies of gems that you did update.
No surprises here. This fits with the general understanding of Bundler and Gemfiles. But keep this in mind as you continue below, since the resolving of dependencies may mean more than you realize.
Imagine this situation:
You run `bundle update cucumber-rails`, thinking it will only update `cucumber-rails`. In fact, this actually updates not just `cucumber-rails` but all of its dependencies as well, which will explode in your face when one of those dependencies releases a new version with breaking API changes. This happens all too often.
Lest you think I’m all alone in this, know that I’m pulling the above example from this post from Makandra Cards, and the idea in general from more experienced developers than myself. The author of the post suggests three options for conservative gem updates, the first of which is to make changes directly to Gemfile.lock.
Crazy, right? Controversial, even? Perhaps not. To date, Bundler has not acknowledged this issue, but there’s a significant use case (edge case, perhaps) that calls for editing Gemfile.lock. Just do it conservatively. Everything in moderation.
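One conservative workflow (sketched here with standard bundler and git commands; `cucumber-rails` is just the running example) is to do the targeted update and then inspect exactly what moved before committing:

```shell
bundle update cucumber-rails   # re-resolves cucumber-rails and its dependencies
git diff Gemfile.lock          # inspect exactly which gems moved
bundle check                   # verify the (possibly hand-edited) lockfile is satisfiable
```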
Notes: The quotes below represent some of the key statements (as I judge them) in order of their appearance in Crockford’s second talk on JavaScript. Read together, they outline the main trajectory of Crockford’s presentation, but they are not intended to replace the entirety of the talk.
JavaScript: The Bad Parts
“Since I discovered that the language had good parts, that sort of implies that it must have had bad parts. Why would anybody design a language with bad parts? How would that come about? In my review of all the bad parts in the language, it mostly comes from three causes. The first is legacy. In copying the Java syntax, JavaScript also copied some bad things about Java, so many of the worst features in JavaScript are actually things it inherited from Java, which it inherited from C, which it inherited from FORTRAN. So there’s a long line of sin-age which affects us today.
“There were some good intentions in the language that didn’t quite work out. Things were added, like semi-colon insertion and implied global variables, with the intention of making the language easier to use for beginners. In fact, it worked, because it turns out that if you have absolutely no idea what you’re doing in the language you can still generally make things work. Unfortunately, those things work against professional programmers trying to do large, sophisticated programs, so there are some trade-offs there that didn’t work out well for us.
“But the biggest influence, by far, was haste. The language was designed, implemented, and shipped in way too little time. Most languages take years to develop – for example, Smalltalk was eight years from Alan Kay’s first prototype to Smalltalk 80, when it was first made available to the public. That’s a good timeframe for a programming language, because you want to go through it and test it, make sure that it works, and refine it in order to make sure that it’s meeting its goals. JavaScript was prepared in about as many days.” (Link)
JavaScript: The Good Parts
“The good news is that, for the most part, the bad parts can be avoided. And if you avoid the bad parts, and if you work just with what’s left over, the good parts, there’s actually a brilliant language there. The features that were selected and the way that they were put together is astonishingly good. It’s a language of amazing expressive power. JavaScript is a language that most people don’t bother to learn before they use. You can’t do that with any other language, and you shouldn’t want to, and you shouldn’t do that with this language either. Programming is a serious business, and you should have good knowledge about what you’re doing, but most people feel that they ought to be able to program in this language without any knowledge at all, and it still works. It’s because the language has enormous expressive power, and that’s not by accident. There’s actually some brilliant design in there.
“The problem with the bad parts isn’t that they’re useless, it’s that they’re dangerous. I see a lot of wannabe ninjas out there who are going through the bad parts and going ‘oh, I found a new use for with, or another thing you can do with eval,’ or some other edge case. Stop doing that. Stop doing that!” (Link)
Object-Oriented JavaScript
“This language is all about objects; it’s an object oriented language. I’ll try to demonstrate to you that it is more object oriented than Java. For a long time, a lot of the opinion about this language was that it’s not object oriented, it’s object based, it’s deficient. It turns out it’s actually a superior language.
“In this language, an object is a dynamic collection of properties. This is quite different than in most of the other object oriented languages in which an object is an instance of a class, where a class has some state and behavior. Objects in this system are much more dynamic. So it’s a collection of properties, and each property has a keystring which is unique within that object. If you add two properties with the same name, the second one will replace the first one.” (Link)
JavaScript accessor property (getter/setter)
“Here’s an example of using an accessor property. The difference between an accessor property and a data property is that an accessor property uses get and/or set. Here I’m defining a property for my object called inch. When I try to get inch, my_object, ’Inch’, I will receive the result of dividing this.mm by 25.4. If I try to set it, I won’t actually set this property, instead I will set millimeter to whatever value I pass times 25.4. So the result of this is that I can have an object with two properties in it that are linked in an interesting constraint way. I can set either the millimeters or the inch and it will appear to fix the other one, so I can keep those two things in sync. There are a lot of really interesting patterns that can be done with these. There are even more evil patterns that can be done with this.
“For example, one of the assumptions that you’ve always had in the language was that you can go to an object and retrieve a property and there’s no transfer of control, you’re just getting some data. Now you’re giving control over to a function which you hope will give it back, but it might not. But it can also mutate the object while it’s getting the thing, so something that used to be a read-only event is now potentially a mutating event which could mutate this object or who-knows-what in the thing. So there are all sorts of really abusive patterns that can be made out of these getters and setters, and I recommend to all the ninjas: don’t get stupid with stuff, because it’s going to be really, really easy to get stupid with this stuff. I’m telling you, you can get stupid with this stuff, and you don’t need to do it. So be smart with this. Use it sparingly.” (Link)
Example:
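A sketch of that inch/millimeter accessor (identifier names assumed, reconstructed from the description in the quote above):

```javascript
var my_object = { mm: 25.4 };

Object.defineProperty(my_object, 'inch', {
  get: function () { return this.mm / 25.4; },        // reading inch derives it from mm
  set: function (value) { this.mm = value * 25.4; }   // writing inch updates mm instead
});

my_object.inch;      // 1
my_object.inch = 2;  // my_object.mm is now 50.8
```

Setting either property keeps the two in sync, which is exactly the “interesting constraint” Crockford describes.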
Classes vs. Prototypes (prototypes!)
“The most controversial feature of the language is the way it does inheritance, which is radically different than virtually all other modern languages. Most languages use classes – I call them ‘classical languages’ – JavaScript does not. JavaScript is class free. It uses prototypes. For people who are classically trained who look at the language, they go: well, this is deficient. You don’t have classes, how can you get anything done? How can you have any confidence that the structure of your program’s going to work? And they never get past that.
“But it turns out classes as we currently understand them were first formulated in 1967, in Simula. The prototypal school was developed about 20 years later, at Xerox Parc, by people who had intimate knowledge of Smalltalk, which was the first modern semi-popular object oriented programming language. The changes that they made were not made in ignorance; it was very well informed, changing, simplifying, and advancing the programming model. And what they did was they created, in my view, a vast improvement over the model that had come before.
“It’s possible that one demonstration of the greater power of the new thing is that, first off, code is smaller. If you’re writing to the prototypal model and you’re doing it correctly, your programs are a lot smaller. For one thing, you take out a lot of the silly redundancy, like ‘I’m creating a variable of this type named That Type, initialized with new That Type.’ You’re saying everything three times, and you tend not to do that in a prototypal language. But more than that, you can simulate the classical language in the prototypal language. You can’t do the other. Java is not powerful enough that you can write in a JavaScript style in Java; it’s just not good enough. JavaScript is, so you can do it the other way around, because it’s the more powerful of the models.” (Link)
Object.create (don’t use `new`)

“I don’t use new anymore. I don’t need it. I’m thinking prototypally now, and when I’m thinking prototypally I can do everything I want to do with Object.create. So I see this now as just a vestige; I don’t need it anymore. There’s also a hazard with new, that if you design a constructor that’s supposed to be used with new and either you, or one of your users, forgets to put the new prefix on it, instead of initializing a new object the constructor’s going to be clobbering the global object, damaging global variables and not doing useful work at all, and there’s no compile time warning or runtime warning of that. That’s a feature I don’t need to use.” (Link)
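For instance (example mine, not from the talk):

```javascript
var prototypeObject = {
  greet: function () { return 'hello, ' + this.name; }
};

// No class, no constructor, no `new`: make an object that delegates to the prototype
var obj = Object.create(prototypeObject);
obj.name = 'world';

obj.greet(); // 'hello, world'
```

Forgetting `new` here is impossible, which removes the global-clobbering hazard the quote describes.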
Functions and objects
“The best feature in the language, the good parts, the very best parts, are functions. We’ll talk about them next time. So that’s all the objects. All the values in this language are objects, with two exceptions: null, and undefined.” (Link)
JavaScript and C
“Syntactically, JavaScript is clearly a member of the C family of programming languages. It’s got the curly braces and all of that stuff. It differs from C mainly in its type system, which allows functions to be values.” (Link)
Crockford on JavaScript – Chapter 2: And Then There Was JavaScript [Video]
Crockford on JavaScript – Chapter 2: And Then There Was JavaScript [Full transcript]
Notes: The quotes below represent some of the key statements (as I judge them) in order of their appearance in Crockford’s first and second talks on JavaScript. Read together, they outline the main trajectory of Crockford’s presentation, but they are not intended to replace the entirety of the talk. The first 50 minutes of the first talk, which cover the history of programming before 1970, are excluded from this post, not because they’re unimportant (they are!), but because it was difficult to pull single quotes that represented the content. All emphases mine.
Innovation since the ‘70s
“The other thing we’ve seen is an end to CPU innovation. We used to see a lot of really radical new designs happening all the time, but we don’t see that happening anymore. Basically we’ve got three architectures that we use for most of our stuff: virtually all the computers are on Intel, most of the game platforms are on Power PCs, most of the mobile devices are on ARM, and that’s it. Nobody’s making new stuff, nothing radical, it’s just refinements of stuff that’s been happening for several decades.
“We’re doing even worse in operating systems. It used to be that every model of every machine had its own operating system, and that came with a lot of obvious inefficiency, so we’ve pushed that down and now we have just two: we’ve got Unix which was developed in the ‘70s, and we’ve got Windows that was developed in the ’80s. Of the two, Unix is obviously the better one, but there’s no innovation happening in operating systems. Basically we’ve been rewriting the same systems for 40 years. That’s just not where we do innovation. Where we do innovation is in programming languages, and that’s been going on for quite a long time.” (link)
Leaps
“Software development comes in leaps, and our leaps are much farther apart than the hardware experiences. Moore’s Law lets the hardware leap every two years; we leap more like every twenty years. Again, basically we need a generation to retire before we can get the good new ideas going, so despite the fact that we’re always talking about innovation and how we love innovation and we’re always innovating, we tend to be extremely conservative in the way we adopt new technology.” (Link)
The beginning of JavaScript
“Basically, [Brendan Eich] took these components: he took the syntax of Java, he took the function model of Scheme—which was brilliant, one of the best ideas in the history of programming languages—and he took the prototype objects from Self. He put them together in a really interesting way, really fast; he completed the whole thing in a couple of weeks. It’s a shame that he wasn’t given the freedom that Xerox had to spend a decade to get this right. Instead of ten years it was more like ten days, and that was it. I challenge any language designer to come up with a brand new design from scratch in ten days and then release it to the world and call it done and see what happens with that.
“One of the consequences of it was that there are parts of it that are just awful. If they’d had more time they probably would have recognized that and fixed it, but they didn’t. Netscape was not a company that had time to get it right, which is why there’s no longer a Netscape.
“But despite that, there is absolutely deep profound brilliance in this language, and this language is succeeding in places where many other languages have failed because of that brilliance; it’s not accidental that JavaScript has become the most popular programming language in the world.” (Link)
A great time to be a programmer
“One thing that’s different now than in the ‘50s and ’60s is there are lot of computers out there, and there are a lot of people writing programs now. It’s possible to get a community of people even if you have a minor language, enough to do useful things, to do a lot of group work. You’ve got a group large enough to justify writing books, which was something we didn’t have back in the ’50s and ’60s. So I think this is a great time to be a programmer. We have lots of choices, and we need to be smart about making those choices and be open to accepting the new ideas, because there are a lot of new ideas out there that we shouldn’t be rejecting just because they’re unfamiliar and we don’t see the need for them. There are actually a lot of good ideas in all of these languages, not least of which is JavaScript…” (Link)
Mythology of innovation
“Now, if you were here last time you’ll remember I went through the history of everything that ever happened, starting with The Big Bang, going through The Dawn of Man, and then finally there was JavaScript. The reason I did that was because understanding the context in which this stuff happens is really important to understanding what we have now. Without that understanding you’re consumed by mythology which has no truth in it, that the history of innovation has been one thing after another where the new, good thing always displaces the old stuff. That’s not how it works, generally. Generally the most important new innovations are received with contempt and horror and are accepted very slowly, if ever. That’s an important bit of knowledge to have, in the case of JavaScript.” (Link)
JavaScript has good parts
“Having that background [understanding history and innovation] allowed me to make the first important discovery of the 21st century, which was that JavaScript has good parts. This was an unexpected discovery, and when I tried to share it with the rest of the community there was a huge amount of skepticism; a lot of people refused to believe it was possible that JavaScript had any redeeming value whatsoever. In fact, it has very, very good parts. But I’m getting a little ahead of the story, so let’s back up a little bit.” (Link)
Why it’s called JavaScript (Netscape-Sun history)
“It was very clear at the time that there was a lot of excitement about Java and the Netscape browser, and Sun and Netscape decided they needed to work together against Microsoft because if they didn’t join forces Microsoft would play them off against each other and they’d both lose. The biggest point of contention in that arrangement was what to do with LiveScript. Sun’s position was: “Well, we’ll put Java into the Netscape browser, we’ll kill LiveScript, and that’ll be that.” And Netscape said no, that they really believed in the HyperCard-like functionality, and they wanted a simpler programming model in order to capture a much larger group of programmers.
“So there was an impasse, and the relationship almost broke up, when I think Marc Andreessen—and I have not been able to document this, but people have told me—Marc Andreessen, maybe as a joke, suggested: ‘let’s change the name to JavaScript.’ And it worked! Except that Sun claimed ownership of the trademark. Even though they had nothing to do with the language and they tried to kill the language, they said ‘we own the trademark, but we’ll give you a license to use the trademark’. Netscape said ‘great, an exclusive license, only we can call it JavaScript, that’s fine’.” (Link)
The destruction of Microsoft
“At Microsoft they’d been watching this with some alarm, particularly when folks at Netscape were saying that Netscape Navigator was going to destroy Microsoft. Microsoft said ‘oh, we don’t want to be destroyed’. It turned out Netscape Navigator didn’t destroy Microsoft. In fact, the software that is going to destroy Microsoft is Windows Mobile.” (Link)
JavaScript naming confusion
“What should we call the language? There’s a lot of confusion. Some people still think that JavaScript, JScript, and ECMAScript are three different languages, and that’s not the case. It’s three silly names for one silly language. JavaScript isn’t actually an open name, which is surprising in that this is the language of the world’s biggest open system. It’s a trademark now of Oracle, and we don’t know what they’re going to do with that. We probably should call it ECMAScript, except it’s such an awful thing to call it.”
Crockford on JavaScript – Volume 1: The Early Years [Video]
Crockford on JavaScript – Volume 1: The Early Years [Full transcript]
Crockford on JavaScript – Chapter 2: And Then There Was JavaScript [Video]
Crockford on JavaScript – Chapter 2: And Then There Was JavaScript [Full transcript]