News SK Hynix Begins Manufacture of 18GB LPDDR5 Memory Packages

DDR2, DDR3, DDR4, DDR5... why aren't we going TDR or QDR? We've been using DDR technology for a long time already.

Triple data rate would be quite awkward. DDR and QDR are built logically around the rising and falling edges of the clock signal; QDR uses separate data buses for read and write. At last check, QDR was absurdly expensive, making it unsuitable for consumer applications.
 
DDR2, DDR3, DDR4, DDR5... why aren't we going TDR or QDR? We've been using DDR technology for a long time already.
The only thing QDR does is reduce the amount of power needed to drive the bus clock, at the expense of extra circuitry in the GDDR5+ chips to internally generate the extra edges for the QDR IOs.
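To put a rough number on that trade-off: for a given per-pin transfer rate, a QDR interface only needs half the bus clock frequency of a DDR one, since it moves four bits per pin per clock instead of two. A minimal sketch of that arithmetic, using a purely illustrative 16 GT/s target rather than anything from this announcement:

    # Rough sketch: bus clock needed for a target per-pin transfer rate.
    # transfers_per_clock is 1 for SDR, 2 for DDR, 4 for QDR.
    def bus_clock_ghz(target_gt_per_s: float, transfers_per_clock: int) -> float:
        return target_gt_per_s / transfers_per_clock

    target = 16.0  # GT/s per pin, illustrative value only
    for name, mult in (("SDR", 1), ("DDR", 2), ("QDR", 4)):
        print(f"{name}: {bus_clock_ghz(target, mult):.0f} GHz clock for {target} GT/s per pin")
    # QDR reaches the same per-pin rate at half the clock frequency of DDR,
    # which is where the clock-distribution power saving comes from.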

QDR uses separate data buses for read and write.
No, QDR means nothing more than that the data rate is 4x the clock frequency. Even GDDR6X with its new PAM4 signaling still shares IO pins between reads and writes. What "got split" with GDDR6 is that the chips are internally structured as two independent 16-bit-wide channels, so you can configure them as either 1x32 bits or 2x16 bits.
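Worth noting that the 1x32 vs 2x16 choice doesn't change a chip's total bandwidth, only how the pins are grouped into channels. A quick sketch with a made-up 14 GT/s per-pin rate, just to show the totals come out the same:

    # Aggregate chip bandwidth is pins * per-pin rate regardless of channel config;
    # 1x32 and 2x16 only change how the 32 DQ pins are grouped into channels.
    def chip_bandwidth_gbit_s(pins_per_channel: int, channels: int, gt_per_s: float) -> float:
        return pins_per_channel * channels * gt_per_s

    rate = 14.0  # GT/s per pin, illustrative value only
    print(chip_bandwidth_gbit_s(32, 1, rate))  # 448.0 Gbit/s as one 32-bit channel
    print(chip_bandwidth_gbit_s(16, 2, rate))  # 448.0 Gbit/s as two 16-bit channels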
 
Hmm, what's the source of my confusion then? I'm unsure now.
You equated QDR to having separate DDR R/W ports. Those two things are completely unrelated; GDDR5/5X/6/6X all have single QDR data ports.

The chip you linked is an SRAM that takes commands and operates at QDR internally, so it can then split its internal bandwidth between its separate DDR read and write ports. That is quite a different application from GDDR5+, which has DDR command and single-port QDR data.

Also, the main reason your SRAM chip is "absurdly expensive" isn't QDR; it is simply the fact that it is an SRAM chip, and those are inherently far more expensive to make per bit due to much lower density. It also doesn't help that it is a relatively low-bandwidth (192MHz max clock) niche product intended to supplement FPGA SRAM, which makes it a (very) low-volume part and further drives the cost up.
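For concreteness on the port-arrangement point, here is a rough on-paper comparison of the two styles, reusing the 192 MHz figure mentioned above and otherwise made-up widths and clocks:

    # QDR-style SRAM: separate DDR read and write ports that can run concurrently.
    # GDDR-style DRAM: a single QDR data port shared between reads and writes.
    def sram_port_gbit_s(clock_mhz: float, port_width_bits: int) -> float:
        return 2 * clock_mhz * port_width_bits / 1000  # one DDR port

    def gddr_bus_gbit_s(clock_mhz: float, bus_width_bits: int) -> float:
        return 4 * clock_mhz * bus_width_bits / 1000  # QDR, half-duplex

    # Illustrative parts: a 192 MHz x18 QDR SRAM vs a 1750 MHz x32 GDDR5 chip.
    read = write = sram_port_gbit_s(192, 18)  # ~6.9 Gbit/s each, usable at the same time
    shared = gddr_bus_gbit_s(1750, 32)        # ~224 Gbit/s, split between reads and writes
    print(read, write, shared)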
 
Looks like someone in marketing convinced the higher-ups to drink the numbers Kool-Aid.
Not really. This announcement is about DRAM packages, and bare chips have pretty much always been marketed based on raw specs. The only difference here is that prior to DDR5, ECC was optional and was implemented by including the extra DRAM package(s) for the ECC bits.

Now that ECC is becoming standard and DDR5 DIMMs have split the slot into two independent sub-channels, it makes more sense to build the extra bits directly into the individual DRAM chips than to add an oddball 4-bit-wide DRAM for ECC to each half of a 36+36 bus setup.
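The bus-width arithmetic behind that, as I understand it (the 36+36 layout is the one described above; treat the numbers as an illustration rather than a quote from the JEDEC spec):

    # Classic side-band ECC DIMM (DDR4 and earlier): one 64-bit channel plus
    # 8 ECC bits = 72 bits, supplied by adding an extra x8 package (or two x4s).
    ddr4_ecc_bus = 64 + 8                    # 72 external data bits

    # DDR5 DIMM: two independent 32-bit sub-channels. Side-band ECC would mean
    # bolting an oddball x4 DRAM onto each half (32+4 twice), whereas building
    # the ECC bits into each chip keeps the external bus at a plain 32+32.
    ddr5_sideband_bus = (32 + 4) + (32 + 4)  # 72 external data bits, two extra x4 packages
    ddr5_on_chip_bus = 32 + 32               # 64 external data bits, ECC stored inside the chips

    print(ddr4_ecc_bus, ddr5_sideband_bus, ddr5_on_chip_bus)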
 
Well that is a welcome change.

Eight bits per byte is the default for non-ECC RAM, while nine bits per byte is the default for ECC RAM; the extra bit is used for error correction (AFAIK you can correct a one-bit error on the fly and detect a two-bit error, but not correct it during runtime). With computer speeds and thread counts ever increasing, those cosmic rays (or some such) have a higher probability of flipping a single bit (or more). Now maybe we'll see mainstream prices for RAM with ECC instead of the prior price split between the two types.
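The 8-vs-9-bits-per-byte split falls straight out of the usual Hamming SECDED math: a 64-bit word needs 7 check bits for single-error correction plus 1 more for double-error detection, so 64+8 = 72 bits, i.e. 9 bits per byte on an ECC DIMM. A tiny sketch of that textbook calculation, nothing specific to this announcement:

    # Check bits for single-error correction (Hamming): smallest r with
    # 2**r >= data_bits + r + 1, plus one overall parity bit for double-error detection.
    def secded_check_bits(data_bits: int) -> int:
        r = 0
        while 2 ** r < data_bits + r + 1:
            r += 1
        return r + 1  # +1 for the extra parity bit (DED)

    print(secded_check_bits(64))             # 8 -> a 64+8 = 72-bit ECC word
    print((64 + secded_check_bits(64)) / 8)  # 9.0 bits per byte vs 8.0 without ECC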
 
With computer speeds and thread counts ever increasing, those cosmic rays (or some such) have a higher probability of flipping a single bit (or more).
Except CPU speed, core count and all that other stuff have little to do with what ECC helps with most in the context of DRAM.

DRAM stores data in tiny capacitors, and those capacitors are getting smaller with each DRAM generation on top of operating at lower voltages. The smaller they are, the more likely they are to flip from things like cosmic rays, hiccups in the refresh cycle, interference from too many writes in neighboring rows, etc., and those can happen irrespective of CPU activity. That is where the bulk of random errors occur.

Up to DDR4, the cells were large enough that the likelihood of single-bit errors was considered a non-issue for non-critical use. Starting with DDR5 though, cell sizes will be getting small enough that the probability of raw bit errors could become problematic even for consumer computing, so JEDEC decided to throw ECC in just in case. If DRAM manufacturers can push their luck with 50% smaller cells at the expense of sacrificing 12.5% of the cell space for ECC, they are still 60+% ahead on user bits per wafer.
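To spell out that last bit of arithmetic: 50% smaller cells means roughly twice the cells per wafer, and giving up 12.5% of them for ECC leaves 2 x 0.875 = 1.75x, i.e. 75% more user bits per wafer, comfortably above the 60+% figure. A tiny sketch using those assumed percentages, not real process data:

    # Back-of-the-envelope: user bits per wafer after a cell shrink plus on-die ECC.
    # Both percentages are the assumptions from the paragraph above, not real process numbers.
    def user_bit_gain(cell_shrink: float, ecc_overhead: float) -> float:
        density_gain = 1 / (1 - cell_shrink)      # 50% smaller cells -> 2x cells per area
        return density_gain * (1 - ecc_overhead)  # fraction of those cells holding user data

    print(user_bit_gain(0.50, 0.125))  # 1.75 -> 75% more user bits per wafer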