[NOTE: The following post is pretty much irrelevant to this case. Go ahead and
ignore this, unless you want to know more and didn't know where to
look.]
The words "static" and "dynamic" are used in many different
contexts in computer science. But usually, "dynamic" has a connotation of
"changing" or "at run-time" and static means "not changing" or "at compile
time".
Compilers usually have an "optimizer" which performs "static
optimizations" by looking at the program (which is usually in some intermediate
form at the time, neither source code nor the final binary output format) and
analysing it and doing transformations that should make it run faster. The
reason these are "static optimizations" is because they only use static
(unchanging) information about the program -- that is, the program itself! They
don't use any "dynamic" (changing) information, because that is information that
can only be gathered at run-time: very short-term information about how things
are going in a particular run. A "dynamic optimization" would be something like
"this function has been called 100 times very recently, so we know it is
hot and we will optimize it more". Or another example might be "of the
last 100 times this if-test was executed, the condition was false 90% of the
time. So we'll generate a version of the code that assumes it will be
false, and is faster for that case."
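To make the "static" part concrete, here's a little sketch in C (my own illustration, not from any particular compiler). Everything the optimizer needs is right there in the program text, so it can do the work once, at compile time:

/* A sketch of a classic static optimization: constant folding.
   Everything the compiler needs is in the program text itself. */
#include <stdio.h>

int seconds_per_week(void) {
    int days = 7, hours = 24, minutes = 60, seconds = 60;
    return days * hours * minutes * seconds;  /* every operand is a known constant */
}

/* A typical optimizer will fold this whole function down to the
   equivalent of "return 604800;" -- computing 7 * 24 * 60 * 60 at
   compile time, from static (unchanging) information alone. */

int main(void) {
    printf("%d\n", seconds_per_week());
    return 0;
}

No run-time information is needed, because the answer can never change from run to run.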
From another point of view: Dynamic
optimizations occur while the program is running and use information
collected during that same run. Dynamic optimizations react to the
dynamically-changing run-time conditions of the program. Generally only JIT
("just-in-time") compilers do dynamic optimizations, because they can generate
new code whenever they want, in response to the dynamic information they have
collected.
Traditional static compilers (also known as "off-line" compilers,
or "ahead of time" compilers) only have a few options. They have to decide what
code to emit, once and for all, during the compiling (before the program is even
running). They can usually use only static information, and do only static
optimizations (the most common kind). Some of them can also do something called
"profile-guided optimization" (PGO), where you first compile an "instrumented"
version of the program, and then you run that version and the instrumentation
stuff collects the dynamic info while you run it (a "profile"). Then you
compile it again, and this time it uses your collected profile to make better
optimization decisions. If the profile says the if-statement's condition is
true 90% of the time, the optimizer can take advantage of that to generate
better code (which branches are taken or not taken is dynamic info that a
traditional compiler doesn't usually have). But even with PGO, this is still
considered a "static optimization" because the program is not running at the
time the optimization is done, and because the optimized code is generated once,
and not changed anymore at run-time. (So if the optimizer guesses wrong, it
can't correct its mistake later. And if your "profile" is not very
representative, the optimizations might actually make the program slower.)
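If you want to see what that workflow looks like in practice, here's a sketch using GCC's PGO flags (-fprofile-generate and -fprofile-use); the program itself is just a toy I made up, with one heavily biased branch:

/* pgo_demo.c -- a toy program with a heavily biased branch, to sketch
   the PGO workflow described above.  The build steps (for GCC) are:

     gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo   (instrumented build)
     ./pgo_demo                                          (run it -> writes a profile)
     gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo        (optimized rebuild)
*/
#include <stdio.h>

int main(void) {
    long hits = 0;
    for (long i = 0; i < 1000000; i++) {
        if (i % 10 == 0)   /* true only 10% of the time... */
            hits++;        /* ...so with a profile, the compiler can lay out
                              the code assuming the branch is usually not taken */
    }
    printf("%ld\n", hits);
    return 0;
}

The second compile bakes the profile's knowledge into the generated code, once and for all; that's what keeps it a static optimization.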
A
JIT ("just in time") compiler is a compiler that runs while your program is
running. Some Java VMs have one of these (the Sun/Oracle VM has one called
"HotSpot", unless they've changed its name). A JIT compiler generates all of
its executable code while the program is running. Its optimizations have to be
lightweight and fast, because you pay for them while the program is running. So
they collect dynamic information to find out which parts of the program are most
important, and spend more time doing a better job of optimizing those. The same
method might be re-compiled several times, with different optimizations each
time, as the VM learns how important it is. Also, when the program conditions
change (e.g., the if-statement used to be 90% true, but lately it seems to be 80%
false) a JIT compiler can notice this and re-optimize the method to be better in
the new situation. This is why it's a "dynamic" optimization.
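Here's a toy sketch, in C, of the "count until hot" bookkeeping a JIT does. A real VM would generate machine code at this point; the names and the threshold below are all made up for illustration:

#include <stdio.h>

/* A toy sketch (not a real JIT) of the "count until hot" idea.  A real
   VM would generate optimized machine code here; this just flips a flag
   once a hypothetical method has been called enough times. */

#define HOT_THRESHOLD 100   /* made-up number; real VMs tune this */

struct method {
    const char *name;
    long call_count;
    int compiled;           /* has the (imaginary) optimizer run yet? */
};

static void invoke(struct method *m) {
    m->call_count++;
    if (!m->compiled && m->call_count >= HOT_THRESHOLD) {
        /* In a real JIT, the dynamic info gathered so far (call counts,
           branch bias, observed types) would steer the compilation. */
        printf("compiling hot method %s after %ld calls\n",
               m->name, m->call_count);
        m->compiled = 1;
    }
    /* ...then interpret, or run the compiled code, as appropriate... */
}

int main(void) {
    struct method m = { "example", 0, 0 };
    for (int i = 0; i < 150; i++)
        invoke(&m);
    return 0;
}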
As for
resolving references: "static linking" is done "off-line". You combine
different chunks of code together into one, and resolve the symbolic references
between them (chunk A uses a method called Foo from chunk B, so when linking
them into a program, the linker decides where exactly it's going to put Foo, and
then it can replace that symbolic reference in A with the actual address. The
program forgets that it calls a function called Foo, and just remembers that it
calls a function at address 12345.) "Dynamic linking" is the same thing, but it happens
"on-line" (while the program is running). When a Windows application loads a
.DLL, the symbolic references to the functions in the DLL (with DLLs, these
references are called "imports") get resolved too. But it's the loader that
does this, at run-time.
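Here's what that same run-time resolution looks like on a POSIX system, where the real calls are dlopen() and dlsym() (on Windows the rough equivalents are LoadLibrary and GetProcAddress). This sketch assumes the math library is installed as libm.so.6:

/* Resolve a symbol at run-time instead of at link-time.
   On older glibc systems, compile with:  gcc dyn_demo.c -ldl  */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *lib = dlopen("libm.so.6", RTLD_NOW);   /* load the math library */
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Resolve the symbolic name "cos" to an actual address, at run-time. */
    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(lib);
    return 0;
}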
Another example of a "dynamic optimization" would be a
Polymorphic Inline Cache (PIC). This is a self-modifying code technique used in
some Smalltalk VMs. Each call site keeps a small cache of methods that it has
recently invoked. Before doing a full method lookup, it checks to see if one of
the cached methods is the correct one. The cache is made with self-modifying
code for performance reasons. On a "cache miss", it does a full method lookup
and then updates the self-modifying code so that next time, if the same message
is sent, it will be fast.
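To sketch the idea without actual self-modifying code, here's a C version where the per-call-site cache is an ordinary data structure checked before the slow lookup. A real Smalltalk PIC patches the machine code at the call site instead, and everything below (names, cache size) is made up for illustration:

#include <stdio.h>
#include <string.h>

/* A data-structure sketch of a polymorphic inline cache. */

typedef void (*method_fn)(void);

struct pic_entry { const char *class_name; method_fn method; };

#define PIC_SIZE 4
struct call_site {
    struct pic_entry cache[PIC_SIZE];   /* recently invoked methods */
    int used;
};

/* Stand-in for the VM's full (slow) method lookup. */
static void generic_print(void) { puts("printed"); }
static method_fn full_lookup(const char *class_name, const char *selector) {
    (void)class_name; (void)selector;
    return generic_print;   /* a real VM walks the class hierarchy here */
}

static void send(struct call_site *site, const char *class_name,
                 const char *selector) {
    /* Fast path: check this call site's own cache first. */
    for (int i = 0; i < site->used; i++) {
        if (strcmp(site->cache[i].class_name, class_name) == 0) {
            site->cache[i].method();
            return;
        }
    }
    /* Cache miss: do the full lookup, then remember the result so the
       next send of the same message from this site is fast. */
    method_fn m = full_lookup(class_name, selector);
    if (site->used < PIC_SIZE) {
        site->cache[site->used].class_name = class_name;
        site->cache[site->used].method = m;
        site->used++;
    }
    m();
}

int main(void) {
    struct call_site site = { .used = 0 };
    send(&site, "Point", "printOn:");   /* miss: full lookup, then cached */
    send(&site, "Point", "printOn:");   /* hit: fast path */
    return 0;
}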