When I started my PhD, back in 2009, I told my advisor I wanted to work on optimizing dynamic programming languages. A big part of my thesis was going to involve the implementation of a JIT compiler for some dynamic language, and so our discussion rapidly focused on which language I should be working with. In the end, we chose JavaScript. It was a good compromise: a widely-used "real-world" programming language, warts and all, that was still small enough for one person to realistically implement a compiler for. The ECMAScript 5 specification was around 250 pages long, and I read the whole thing from cover to cover before I began working on Higgs.
Since then, I feel I've been watching JavaScript go the way of C++: it's becoming a "kitchen sink" language. So many new features have been added that the new ES6 specification document is literally twice the length of the ES5 specification. Worse yet, a year before the ES6 specification was even completed, there was already a laundry list of features scheduled for integration into ES7. They were nowhere near finished with ES6, and they were already planning ES7. There are a number of semantic inconsistencies in JavaScript that need fixing, but the ES6 and ES7 additions do nothing to fix those; they merely add new features (read: complexity) to the language.
Personally, I'm a big fan of simplicity and minimalism in programming language design. I think that smaller languages have the potential to be easier to implement, optimize, teach, debug and understand. The bigger your language, the more semantic warts pop up, and the more behavioral inconsistencies you get between different VM implementations. If JavaScript is really "the assembly language of the web", then why does it need all these high-level features? The logical thing to do would have been to freeze as much of the JS semantics as possible and focus on improving support for JS as a compiler target. I believe the reason JS keeps growing instead is, in large part, design by committee.
Of course I'm biased. I implemented my own JavaScript JIT compiler, and the fact is, I'm too busy to keep up with all these new additions. Still, it seems to me that in the web world, nobody takes the time to pause, breathe and think things out for even a moment. Case in point: Mozilla made a lot of noise with asm.js, a standard subset of JS meant as a target for compiling native code, allegedly better than Google's Native Client. I think asm.js is still so new that developers haven't really had time to adopt it; so far it's only been used in tech demos. Yet Mozilla and Google are already working on WebAssembly, which in all likelihood will make asm.js irrelevant. Think about that for a second: asm.js, which is still very new (it dates from 2013, so it's only two years old), is already mostly irrelevant, before anyone even had time to adopt it.
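For readers who haven't seen it, here's a minimal sketch of what code in the style of asm.js looks like (the module and function names are mine, purely for illustration): the bitwise coercions act as integer type annotations that an asm.js-aware VM can use to compile the module ahead of time, and the whole thing still runs as ordinary JavaScript in engines that don't know about asm.js.

```javascript
// Minimal sketch in the style of asm.js (illustrative, hand-written; real
// asm.js is normally generated by a compiler such as Emscripten).
function AsmAdder(stdlib, foreign, heap) {
    "use asm";
    function add(a, b) {
        a = a | 0;            // annotate parameter a as a 32-bit integer
        b = b | 0;            // annotate parameter b as a 32-bit integer
        return (a + b) | 0;   // annotate the return value as a 32-bit integer
    }
    return { add: add };
}

// In an engine without asm.js support, this is just plain JavaScript.
// A real module would receive the global object and a heap buffer here.
var adder = AsmAdder({}, {}, new ArrayBuffer(0x10000));
console.log(adder.add(2, 3)); // prints 5
```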
WebAssembly is essentially what Brendan Eich told us we didn't really want or need: a bytecode format for the web, a somewhat more neutral platform for all compilers to target. As a compiler implementer, I still see it as a bit of an unfortunate compromise: a way to retrofit a web bytecode into JavaScript VMs. It's going to take programs encoded as Abstract Syntax Trees (ASTs) as input, whereas GCC, clang, and other real-world compilers work with Control Flow Graphs (CFGs) in their back ends, not ASTs. Forcing compilers to convert CFGs back into ASTs seems like a decision made to simplify the job of WebAssembly VM implementers, at the expense of everyone else.
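To make the CFG-versus-AST point concrete, here's a small sketch (the function names and code shapes are mine, not output from any actual compiler). A back end that holds a function as a graph of basic blocks can always serialize that graph as a state machine, with one switch case per block; an AST-shaped format with only nested blocks and loops instead forces the compiler to rediscover the structured form, which is the extra work that passes like Emscripten's "relooper" exist to do.

```javascript
// A compiler back end sees control flow as a graph of basic blocks with
// arbitrary edges. One direct way to emit such a graph is a state machine:
// one switch case per basic block, with edges becoming assignments.
function sumCfgStyle(n) {
    var total = 0;
    var block = "header";
    while (true) {
        switch (block) {
        case "header":
            block = n > 0 ? "body" : "exit"; // two outgoing CFG edges
            break;
        case "body":
            total += n;
            n -= 1;
            block = "header";                // back edge to the loop header
            break;
        case "exit":
            return total;
        }
    }
}

// An AST-shaped format like WebAssembly's only offers nested, structured
// constructs (blocks, loops, ifs), so the same graph has to be turned back
// into structured code before it can be emitted:
function sumStructured(n) {
    var total = 0;
    while (n > 0) {
        total += n;
        n -= 1;
    }
    return total;
}
```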