Optimize later: functions
Dec. 3rd, 2013 12:49 am

Consider a generic function like this one:
function square(x){ return x * x; }
In C, you tell the compiler what data type 'x' is, and the compiler emits the corresponding assembly code and treats that as the canonical square() function. In an object-oriented system, there is a layer of overhead to track which objects have which interfaces, and the decision of what to do is made at runtime.
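To make the C half of that concrete, a minimal sketch (the function names are just illustrative): each declared parameter type gets its own fixed piece of machine code at compile time.

/* In C the parameter type is fixed when the function is declared, so the
   compiler emits one fixed piece of machine code per declaration. */
int    square_int(int x)       { return x * x; }  /* integer multiply  */
double square_double(double x) { return x * x; }  /* floating multiply */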
Imagine something in between, with JIT-like compilation based on how the caller uses the function (a rough sketch in C follows the list):
- If the caller will send an int, compile some assembly code using an int.
- If the caller will send a float, compile some assembly code using a float.
- If the value in the caller is known to be small enough not to need a 64-bit int, say it's in range(0..10), and smaller data types are faster on this hardware, use a smaller data type.
- If the caller will send an object that has an overridden * method, see if it's possible to optimize that.
- If the data type is indeterminate, go with the high-overhead object oriented method.
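Here's that idea as a rough C sketch, with the actual code generation elided: the specialized functions stand in for machine code the system would emit on demand, square() plays the dispatcher, and every name and type tag below is made up for illustration.

#include <stdio.h>

/* Rough sketch: the variants below stand in for machine code a JIT would
   generate on demand; square() plays the dispatcher that picks one based
   on what the caller passes. Every name here is made up for illustration. */

typedef enum { ARG_SMALL_INT, ARG_INT64, ARG_DOUBLE, ARG_OBJECT } arg_kind;

/* An "object" whose '*' behaviour is supplied at runtime -- the overridden
   multiply method from the last bullet. */
typedef struct obj {
    double value;
    double (*mul)(const struct obj *, const struct obj *);
} obj;

/* A dynamically typed argument, as the generic entry point sees it. */
typedef struct {
    arg_kind kind;
    union { int i; long long i64; double d; const obj *o; } as;
} value;

/* Specialized variants -- one per shape of argument. */
static int       square_small_int(int x)     { return x * x; }  /* fits in 32 bits */
static long long square_int64(long long x)   { return x * x; }
static double    square_double(double x)     { return x * x; }
/* Generic, high-overhead path: go through the object's own multiply method. */
static double    square_object(const obj *x) { return x->mul(x, x); }

static double obj_mul(const obj *a, const obj *b) { return a->value * b->value; }

/* The dispatcher: pick a specialization based on what the caller sends.
   A real JIT would also generate and cache variants that don't exist yet. */
static double square(value v)
{
    switch (v.kind) {
    case ARG_SMALL_INT: return (double)square_small_int(v.as.i);
    case ARG_INT64:     return (double)square_int64(v.as.i64);
    case ARG_DOUBLE:    return square_double(v.as.d);
    case ARG_OBJECT:    return square_object(v.as.o);
    }
    return 0.0;
}

int main(void)
{
    value small = { ARG_SMALL_INT, { .i = 7 } };
    obj   o     = { 3.0, obj_mul };
    value boxed = { ARG_OBJECT, { .o = &o } };
    printf("%g %g\n", square(small), square(boxed));  /* prints 49 9 */
    return 0;
}

A real implementation would also generate and cache new variants at that dispatch point rather than relying on a fixed switch, but the shape of the decision is the same.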
There would be no one canonical square() implementation. The compiler/interpreter would be aware of several different square() implementations and would choose to use or create a particular one based on the circumstances.
Now consider having these different compiled segments stored in the binary, with the high-level version also available for any subroutines that might need it.
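As a standalone sketch of that (again, every name is made up), the "storage" could be a table of precompiled variants keyed by argument shape, with the high-level source text kept right next to them:

/* Standalone sketch: a table of precompiled variants keyed by argument
   shape, with the high-level source kept alongside for anything the
   table can't serve. */
typedef enum { ARG_INT, ARG_DOUBLE } arg_kind;
typedef void (*code_ptr)(void);   /* generic pointer to a compiled variant */

static int    square_int(int x)       { return x * x; }
static double square_double(double x) { return x * x; }

static const struct { arg_kind kind; code_ptr compiled; } square_table[] = {
    { ARG_INT,    (code_ptr)square_int    },
    { ARG_DOUBLE, (code_ptr)square_double },
};

/* The high-level definition, shipped next to its compiled variants so a
   later compiler/interpreter can still specialize or interpret it. */
static const char square_source[] = "function square(x){ return x * x; }";

Since the table and the variants are ordinary static data and functions, they land in the binary like anything else; the source string is what anything the table can't serve would fall back on.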
Do any language environments do this?