Revisiting the grand old topic of "compiled" vs. "interpreted" simulations, I found the following post interesting with reference to Specman and the e language.
I couldn't agree more with that post - it is really beneficial to explore what can be pushed into compiled code. Some more data points from my own experience during my Intel days:
1. We got a 3 to 4x gain in compiled mode, though that data is 5+ years old.
2. There are (were?) some restrictions on the use of computed macros across such compiled SO files/partitions. I'm not sure if they are still around, and I don't recall all the details off the top of my head, though if someone is interested I can try to recollect them.
3. A very *important* aspect is the debuggability of compiled code. The line-stepping feature is disabled for compiled portions, so if you need to debug with line stepping, go back to loaded mode.
4. Note that "function"-level breakpoints are still possible in compiled mode.
5. If there is a null object access, compiled mode won't reveal many details, while interpreted/loaded mode will point to the exact issue. We used this facility so often that we in fact automated the process via a script - i.e. if a run hit a null object access, a separate process would spawn off with the same test/seed but in loaded mode. This was part of the automation we presented at DesignCon (http://www.iec.org/events/2004/designcon_east/pdf/3-wp1.pdf), though this specific tip/trick was left undocumented there.
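The rerun-on-null-access automation could be sketched roughly as below. This is a hypothetical reconstruction, not our original script: the tool name (`specrun`), the `-load_mode` flag, and the error-message pattern are all placeholders - substitute your simulator's actual invocation and log wording.

```python
import re

# Placeholder pattern for the simulator's null-object error message (an assumption,
# not the actual Specman wording).
NULL_ACCESS_RE = re.compile(r"null object access", re.IGNORECASE)

def needs_loaded_rerun(log_text: str) -> bool:
    """Return True if the simulation log shows a null object access."""
    return NULL_ACCESS_RE.search(log_text) is not None

def build_rerun_cmd(test: str, seed: int) -> list:
    """Build a command to replay the same test/seed in loaded (interpreted) mode.
    'specrun' and '-load_mode' are hypothetical names."""
    return ["specrun", "-load_mode", "-test", test, "-seed", str(seed)]

def maybe_respawn(log_text: str, test: str, seed: int):
    """Scan a finished run's log; if it died on a null object access,
    return the loaded-mode replay command (which the real script would
    launch as a separate background process)."""
    if needs_loaded_rerun(log_text):
        cmd = build_rerun_cmd(test, seed)
        # subprocess.Popen(cmd)  # fire-and-forget replay for better diagnostics
        return cmd
    return None
```

The point of the design is that the expensive compiled run and the debuggable loaded run use the *same* test and seed, so the replay reproduces the failure with full diagnostics and no human in the loop.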
6. Lastly, we did see a few corner-case scenarios wherein random generation differed between the two modes. Verisity knew about it back then (around 2003?) and said they would fix it sometime in the future; I'm not sure whether it is still an issue. Note that it was not easy to reproduce and was a true corner case, so don't ask me for a "testcase" now :-)