tsnor :
palladin9479 :
.....And now you're just ignoring what I write and inventing your own descriptions. You can't redefine terms to make your invalid argument correct. Execution time is the time it takes to execute a specific instruction in transistors; it's a well-known term used for designing integrated logic. .... Its definition is standardized and not open to reinterpretation by random people on internet forums. This standard definition is critically important when building the instruction schedulers that are the glue holding all our shiny CPUs together.
LOL we've been hit by an overloaded definition. Please accept that 'execution time' as used in, for example, 'the execution time of this program was 2.1 CPU seconds' does *NOT* mean the time it takes to execute a specific instruction in transistors with an infinite L1 cache. It refers to the wall-clock time the CPU was considered busy by the operating system (or whatever tool was measuring performance), and includes all of the pipeline stalls, cache misses, etc. that contribute to the wall-clock-measured execution time.
Rereading your posts they make sense with your definition. Pretty funny.
Execution time is an actual term; it's not a definition up for debate. It's the length of time a specific instruction takes to execute in silicon, not how long it takes to set up the instruction. I linked documentation that describes the various execution latencies during the execution stage, which is the same thing as execution time. And I've said, several times, that this number is important because you design your scheduler around it. The scheduler doesn't care how long, in real time, it takes an instruction to pass down the entire pipeline; it only cares how many absolute cycles each stage takes, in order to optimize instruction flow.
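To make the scheduler point concrete, here's a minimal sketch of issue scheduling driven purely by per-instruction cycle latencies. The instruction names and latency values are illustrative assumptions, not taken from any real core's documentation:

```python
# Illustrative per-instruction execution latencies in cycles (assumed values,
# not from any real CPU's tables).
EXEC_LATENCY = {"add": 1, "mul": 3, "div": 20, "load": 4}

def ready_cycle(program):
    """Return, for each instruction, the cycle it issues and the cycle its
    result is ready, assuming one issue per cycle and in-order issue that
    stalls until all source operands are available."""
    done = {}      # destination register -> cycle its value becomes ready
    issue = 0      # earliest cycle the next instruction may issue
    schedule = []
    for dest, op, srcs in program:
        # Stall issue until every source operand has been produced.
        start = max([issue] + [done.get(s, 0) for s in srcs])
        done[dest] = start + EXEC_LATENCY[op]
        schedule.append((dest, op, start, done[dest]))
        issue = start + 1
    return schedule

prog = [("r1", "load", []), ("r2", "mul", ["r1"]), ("r3", "add", ["r2"])]
for dest, op, start, end in ready_cycle(prog):
    print(f"{op} -> {dest}: issues cycle {start}, result ready cycle {end}")
```

Note the scheduler only ever reasons in cycles; converting any of this to wall-clock time would require the clock frequency, which it never needs.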
We should still have room as far as your argument goes because:
Light travels ~29.979 cm in one nanosecond, meaning that, technically, a light-foot is ~1.0167 nanoseconds.
https://en.wikipedia.org/wiki/Nanosecond
and since ~30 centimeters is about a foot, there should be some room left as far as the trace length goes.
I can do the math if you disagree, just being lazy atm.
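Since I'm being lazy, here's the math done in a few lines of Python, just checking the light-foot figure from the Wikipedia quote above:

```python
# Speed of light in vacuum, expressed in cm per nanosecond.
C = 29.9792458

print(f"light travels {C:.3f} cm in 1 ns")
# One foot is 30.48 cm, so a light-foot is just over a nanosecond.
print(f"one foot (30.48 cm) takes {30.48 / C:.4f} ns")
```

So the ~1.0167 ns light-foot checks out; at a ~7 ns command latency, light in vacuum could cover over two meters, which is the "room left" being argued here.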
If there were a fiber-optic wire connecting the memory controller directly to the very last chip on the line, then yes, but because the signal needs to pass through several gates and be analyzed along its path, no. The path also isn't perfect with minimal resistance and perfect uniformity: there are solder joints, DIMM contacts, and some slight EMI from other buses along the way. All those little things add up to a practical limit of about 7 to 7.5 ns for command latency. Now if you had DRAM chips soldered onto the same PCB as the CPU, sitting right next to it, then you could go a bit lower on command latency. If you avoided the solder entirely, used a serial bus, and put the memory onto the same package as the CPU, then you could make real headway.
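A rough budget shows why the practical limit sits so far above raw propagation time. Every number below is an assumed ballpark figure for illustration (trace length, per-stage delays, and resynchronization costs are not measured values for any specific board):

```python
# Why command latency lands around 7 ns even though the trace is far
# shorter than a light-foot. All figures are assumed ballpark values.
c_vacuum = 29.98               # cm/ns, speed of light in vacuum
trace_speed = 0.5 * c_vacuum   # signals in PCB traces travel at roughly half c
trace_len = 10.0               # cm, controller to farthest DRAM chip (assumed)

prop = trace_len / trace_speed  # raw wire propagation, well under 1 ns
gates = 7 * 0.5                 # assumed: ~7 buffer/register stages, ~0.5 ns each
sync = 2 * 1.5                  # assumed: two clock-domain resync penalties

total = prop + gates + sync
print(f"propagation {prop:.2f} ns + gates {gates:.2f} ns "
      f"+ sync {sync:.2f} ns = {total:.2f} ns")
```

The point of the sketch: propagation is a rounding error; the gate stages and resynchronization on the path dominate, which is why shortening the physical path (on-PCB or on-package memory) only helps once you also remove hops.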
Ultimately it's not that big a deal. CPU prefetching and caching hide most of the latency from the user; it's only when there is a mispredict and the data isn't in any cache that command latency really becomes an issue.
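The "caches hide it" point can be quantified with a standard average-memory-access-time calculation. The hit rates and latencies here are illustrative assumptions, not measurements:

```python
# Average memory access time (AMAT) sketch showing how caching hides DRAM
# latency. All rates and latencies below are assumed, illustrative numbers.
l1_hit, l1_lat = 0.95, 1.0   # L1 hit rate; L1 latency in ns
l2_hit, l2_lat = 0.90, 4.0   # hit rate among L1 misses; L2 latency in ns
dram_lat = 70.0              # full miss to DRAM, which includes ~7 ns command latency

amat = l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * dram_lat)
print(f"average access time = {amat:.2f} ns")
```

With these numbers the 70 ns DRAM trip contributes well under a nanosecond on average; only the rare all-level miss ever exposes command latency, which is exactly the point above.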