Hi Matt. First of all, impressive work you're doing here :)
I suspect that performance could be improved considerably if each opcode were 32-bit instead of 64-bit. It seems like the reason they're 64-bit is to encode immediate floats or jump positions. What about switching to 32-bit opcodes, and, when an immediate value is expected, having the interpreter step one word forward and read the immediate at that location before continuing with the interpreter loop as usual?
```c
#define IMM(d) (((float*)(d))[1])

#define JUMP_TARGET(d) (((int32_t*)(d))[1])
```
I might try this out soon, but until then this issue can be a place for discussion.
(For reference: the current clause encoding is in `mpr/inc/clause.hpp`, lines 22 to 23 at `eb63def`.)