200

Do browsers (IE and Firefox) parse linked javascript files every time the page refreshes?

They can cache the files, so I'm guessing they won't try to download them each time, but as each page is essentially separate, I expect them to tear down any old code and re-parse it.

This is inefficient, although perfectly understandable, but I wonder if modern browsers are clever enough to avoid the parsing step within sites. I'm thinking of cases where a site uses a javascript library, like ExtJS or jQuery, etc.

2
  • 4
    My 2c: I feel the performance benefits of caching parsed Javascript files are too small for this to be a meaningful optimization.
    – Itay Maman
    Jul 8, 2009 at 9:03
  • 2
    From my benchmarks, it might actually matter. For instance jQuery load time is around 30msecs (on a fast desktop machine), of which 20% are only parsing the code into an executable representation, and the rest is executing it, i.e. initializing the jQuery object in this case. If you're on mobile, and you use two or three libraries, this delay could be relevant, as JavaScript execution is blocking, and the page is essentially blank until every JS script is loaded in memory.
    – djjeck
    Feb 15, 2012 at 11:32
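
The compile-versus-execute split described in the comment above can be approximated with a rough micro-benchmark in Node (a sketch: the tiny source string is a stand-in for a real library such as jQuery, and `new Function` only roughly mirrors what a browser does when it loads a script tag):

```javascript
// Rough sketch: separate compile time from execute time for a script.
// Point `src` at something large (e.g. the jQuery source) to get numbers
// comparable to the comment above; this small loop is just a stand-in.
const src = "var total = 0; for (var i = 0; i < 1e6; i++) { total += i; } return total;";

const t0 = process.hrtime.bigint();
const compiled = new Function(src);   // parse + compile only
const t1 = process.hrtime.bigint();
const result = compiled();            // execute the compiled code
const t2 = process.hrtime.bigint();

console.log("compile: " + Number(t1 - t0) / 1e6 + " ms");
console.log("execute: " + Number(t2 - t1) / 1e6 + " ms");
```

Note that modern engines compile lazily, so timings like these are approximate at best.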

6 Answers

348
+500

These are the details that I've been able to dig up. It's worth noting first that although JavaScript is usually considered to be interpreted and run on a VM, this isn't really the case with modern engines, which tend to compile the source directly into machine code (with the exception of IE).


Chrome : V8 Engine

V8 has a compilation cache. This stores compiled JavaScript using a hash of the source for up to 5 garbage collections. This means that two identical pieces of source code will share a cache entry in memory regardless of how they were included. This cache is not cleared when pages are reloaded.

Source


Update - 19/03/2015

The Chrome team have released details about their new techniques for JavaScript streaming and caching.

  1. Script Streaming

Script streaming optimizes the parsing of JavaScript files. [...]

Starting in version 41, Chrome parses async and deferred scripts on a separate thread as soon as the download has begun. This means that parsing can complete just milliseconds after the download has finished, and results in pages loading as much as 10% faster.

  2. Code caching

Normally, the V8 engine compiles the page’s JavaScript on every visit, turning it into instructions that a processor understands. This compiled code is then discarded once a user navigates away from the page as compiled code is highly dependent on the state and context of the machine at compilation time.

Chrome 42 introduces an advanced technique of storing a local copy of the compiled code, so that when the user returns to the page the downloading, parsing, and compiling steps can all be skipped. Across all page loads, this allows Chrome to avoid about 40% of compile time and saves precious battery on mobile devices.


Opera : Carakan Engine

In practice this means that whenever a script program is about to be compiled, whose source code is identical to that of some other program that was recently compiled, we reuse the previous output from the compiler and skip the compilation step entirely. This cache is quite effective in typical browsing scenarios where one loads page after page from the same site, such as different news articles from a news service, since each page often loads the same, sometimes very large, script library.

JavaScript is therefore cached across page loads; two requests for the same script will not result in re-compilation.

Source


Firefox : SpiderMonkey Engine

SpiderMonkey uses Nanojit, a JIT compiler, as its native back-end. The process of compiling the machine code can be seen here. In short, it appears to recompile scripts as they are loaded. However, a closer look at the internals of Nanojit shows that the higher-level monitor jstracer, which is used to track compilation, can transition through three stages during compilation, providing a benefit to Nanojit:

The trace monitor's initial state is monitoring. This means that spidermonkey is interpreting bytecode. Every time spidermonkey interprets a backward-jump bytecode, the monitor makes note of the number of times the jump-target program-counter (PC) value has been jumped-to. This number is called the hit count for the PC. If the hit count of a particular PC reaches a threshold value, the target is considered hot.

When the monitor decides a target PC is hot, it looks in a hashtable of fragments to see if there is a fragment holding native code for that target PC. If it finds such a fragment, it transitions to executing mode. Otherwise it transitions to recording mode.

This means that the native code for hot fragments of code is cached, and will not need to be recompiled. It is not made clear whether these cached native sections are retained between page refreshes, but I would assume that they are. If anyone can find supporting evidence for this then excellent.
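
The monitoring/recording/executing transitions quoted above can be sketched as a hit counter keyed by jump-target PC (illustrative only: `HOT_THRESHOLD` and the fragment strings stand in for Nanojit's real data structures):

```javascript
// Illustrative sketch of the trace monitor's hit counting: interpret
// bytecode until a backward-jump target gets "hot", then either run a
// cached native fragment or start recording one.
const HOT_THRESHOLD = 2;        // the real threshold is also small
const hitCounts = new Map();    // jump-target PC -> hit count
const fragments = new Map();    // jump-target PC -> "native code" fragment

function onBackwardJump(targetPC) {
  const hits = (hitCounts.get(targetPC) || 0) + 1;
  hitCounts.set(targetPC, hits);
  if (hits < HOT_THRESHOLD) return "monitoring";   // keep interpreting
  if (fragments.has(targetPC)) return "executing"; // run cached native code
  fragments.set(targetPC, "fragment for PC " + targetPC);
  return "recording";                              // record a new trace
}

console.log(onBackwardJump(0)); // "monitoring"
console.log(onBackwardJump(0)); // "recording" (hot, but no fragment yet)
console.log(onBackwardJump(0)); // "executing" (fragment is now cached)
```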

EDIT: It's been pointed out that Mozilla developer Boris Zbarsky has stated that Gecko does not cache compiled scripts yet. Taken from this SO answer.


Safari : JavaScriptCore/SquirrelFish Engine

I think that the best answer for this implementation has already been given by someone else.

We don't currently cache the bytecode (or the native code). It is an
option we have considered, however, currently, code generation is a
trivial portion of JS execution time (< 2%), so we're not pursuing
this at the moment.

This was written by Maciej Stachowiak, the lead developer of Safari. So I think we can take that to be true.

I was unable to find any other information but you can read more about the speed improvements of the latest SquirrelFish Extreme engine here, or browse the source code here if you're feeling adventurous.


IE : Chakra Engine

There is no current information regarding IE9's JavaScript Engine (Chakra) in this field. If anyone knows anything, please comment.

This is quite unofficial, but for IE's older engine implementations, Eric Lippert (a MS developer of JScript) states in a blog reply here that:

JScript Classic acts like a compiled language in the sense that before any JScript Classic program runs, we fully syntax check the code, generate a full parse tree, and generate a bytecode. We then run the bytecode through a bytecode interpreter. In that sense, JScript is every bit as "compiled" as Java. The difference is that JScript does not allow you to persist or examine our proprietary bytecode. Also, the bytecode is much higher-level than the JVM bytecode -- the JScript Classic bytecode language is little more than a linearization of the parse tree, whereas the JVM bytecode is clearly intended to operate on a low-level stack machine.

This suggests that the bytecode is not persisted in any way, and thus is not cached.
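
The phrase "little more than a linearization of the parse tree" in the quote can be illustrated with a toy example (a sketch: the tree shape and instruction names here are invented, not JScript's actual bytecode):

```javascript
// Toy illustration of "bytecode as a linearized parse tree": a
// post-order walk of the tree for (1 + 2) * 3 produces a flat list of
// stack-machine-style instructions, roughly what the quote describes.
const tree = {
  op: "*",
  left: { op: "+", left: { value: 1 }, right: { value: 2 } },
  right: { value: 3 },
};

function linearize(node, out = []) {
  if ("value" in node) {
    out.push("PUSH " + node.value);   // leaf: push the literal
  } else {
    linearize(node.left, out);        // linearize operands first,
    linearize(node.right, out);
    out.push("APPLY " + node.op);     // then apply the operator
  }
  return out;
}

console.log(linearize(tree));
// ["PUSH 1", "PUSH 2", "APPLY +", "PUSH 3", "APPLY *"]
```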

9
  • 10
    +1, excellent writeup. However, regarding Firefox, please see this StackOverflow question where Mozilla Developer Boris Zbarsky explains that Gecko currently does not do this.
    – cha0site
    Feb 13, 2012 at 18:05
  • Thanks, I saw that in my travels but couldn't find any other supporting evidence. I'll edit the answer with it.
    – Jivings
    Feb 13, 2012 at 19:20
  • 1
    Note that what was said about IE was said in 2003: IE9's JS engine first release was in IE9 in 2011.
    – gsnedders
    Feb 15, 2012 at 9:45
  • Also, Opera caches JS bytecode over more than just reloads. (Generated machine-code is not cached, however).
    – gsnedders
    Feb 15, 2012 at 9:46
  • 2
    @Jivings Take the above as a source. (I am one of the people on the Carakan team.)
    – gsnedders
    Feb 15, 2012 at 11:18
12

Opera does it, as mentioned in the other answer. (source)

Firefox (SpiderMonkey engine) does not cache bytecode. (source)

WebKit (Safari, Konqueror) does not cache bytecode. (source)

I'm not sure about IE [6/7/8] or V8 (Chrome). I think IE might do some sort of caching, while V8 may not. IE is closed source, so I can't be sure; and in V8 it may not make sense to cache "compiled" code, since it compiles straight to machine code.

3
  • 1
    IE6–8 almost certainly won't. IE9 might, but I don't have any evidence either way. Compiled JS likely isn't cached anywhere because it is quite often pretty large.
    – gsnedders
    Feb 13, 2012 at 10:38
  • @gsnedders: I'm not sure that IE8 can't technically do it, it seems that it too compiles to bytecode (not official but close), so there's no technical reason not to cache that. IE9 seems to add a JIT to compile to native code.
    – cha0site
    Feb 13, 2012 at 10:50
  • 2
    Bytecode has been used by IE for… forever. It's nothing new in IE8. It's merely just that given an interpreter the performance of the interpreter is so much slower than parse-time it's entirely irrelevant. IE9 has an entirely new (from-scratch) JS engine, so nothing follows between the two.
    – gsnedders
    Feb 13, 2012 at 11:43
3

As far as I am aware, only Opera caches the parsed JavaScript. See the section "Cached compiled programs" here.

1
  • thanks, do you have more details on other browser family too?
    – ajreal
    Feb 12, 2012 at 19:22
2

It's worth noting that Google Dart explicitly tackles this problem via "Snapshots": the goal is to speed up initialization and loading time by loading a pre-parsed version of the code.

InfoQ has a good writeup @ http://www.infoq.com/articles/google-dart

0

I think that the correct answer would be "not always." From what I understand, both the browser and the server play a role in determining what gets cached. If you really need files to be reloaded every time, then I think you should be able to configure that from within Apache (for example). Of course, I suppose that the user's browser could be configured to ignore that setting, but that's probably unlikely.

So I would imagine that in most practical cases, the javascript files themselves are cached, but are dynamically re-interpreted each time the page loads.
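
Whether the file itself is re-downloaded is governed by HTTP caching headers, which can be sketched as a simple policy function (a sketch: `cacheHeadersFor` and the one-day `max-age` are arbitrary illustrative choices, whether set in Apache or anywhere else):

```javascript
// Minimal sketch: choose caching headers for static files. With a long
// max-age the browser reuses its cached copy without re-downloading --
// but it still re-parses the script on each page load, per the answers
// above.
function cacheHeadersFor(path) {
  if (path.endsWith(".js")) {
    return { "Cache-Control": "public, max-age=86400" }; // cache for one day
  }
  return { "Cache-Control": "no-cache" }; // force revalidation every load
}

console.log(cacheHeadersFor("/lib/jquery.js")); // long-lived cache
console.log(cacheHeadersFor("/index.html"));    // revalidate each time
```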

0

The browser definitely makes use of caching, but yes, browsers parse the JavaScript every time the page refreshes, because whenever a page is loaded the browser creates two trees: a content tree and a render tree.

The render tree holds the information about the visual layout of the DOM elements. So whenever a page loads, the JavaScript is parsed, and any dynamic change it makes, such as positioning a DOM element, showing/hiding an element, or adding/removing elements, causes the browser to recreate the render tree. Modern browsers like Firefox and Chrome handle this slightly differently, though: they use incremental rendering, so dynamic changes like those above cause only the affected elements to be rendered and repainted again.
