Today’s high-performance and embedded applications are characterised by ever-increasing demands on memory bandwidth and size. As a result, the number of IOs of the memory subsystem keeps growing, yet the number of IO pins is limited by the package. Moreover, the energy consumed per bit for going off-chip is many times higher than the energy required for on-chip accesses, and complex, power-hungry IO transceiver circuits are needed to cope with the electrical characteristics of the interconnections between the chips.
In addition, the random access latencies and internal cycle times of DRAMs are not decreasing at the same rate as microprocessor cycle times. This problem is known as the Memory Wall in high-performance and embedded computing.
Three-dimensional stacked memories have been proposed as a promising solution to the power-versus-bandwidth dilemma and the Memory Wall. These memories reduce the distance between the CPU and the external RAM from centimetres to micrometres by means of through-silicon via (TSV) technology. Random access times are improved, but more importantly, this technology provides a major boost in energy efficiency compared with standard SDR or DDR/2/3 DRAM devices. The pairing of high-bandwidth communication with the lower power consumption of 3D-integrated memory is an ideal fit for high-performance and embedded applications. For instance, in a terascale computing node, about 70% of the overall power is consumed by the DRAM and its interface. Our exploration of the 3D-DRAM design space showed that an optimised 3D-DRAM can reduce the energy per bit by 80% in comparison to an LPDDR2x32 DRAM.
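To make the energy-per-bit figures tangible, the following minimal sketch converts pJ/bit into interface power at a given bandwidth. All concrete numbers here are hypothetical placeholders; only the 80% reduction factor is taken from the text above.

```python
def interface_power_watts(bandwidth_gbps: float, energy_pj_per_bit: float) -> float:
    """Interface power = bandwidth (bits/s) x energy per bit (J/bit)."""
    bits_per_second = bandwidth_gbps * 1e9
    joules_per_bit = energy_pj_per_bit * 1e-12
    return bits_per_second * joules_per_bit

# Hypothetical example: a 100 Gbit/s memory interface.
lpddr2_pj_per_bit = 40.0                      # assumed baseline, placeholder value
dram3d_pj_per_bit = lpddr2_pj_per_bit * 0.2   # the cited 80% energy-per-bit reduction

p_lpddr2 = interface_power_watts(100, lpddr2_pj_per_bit)  # 4.0 W
p_3d = interface_power_watts(100, dram3d_pj_per_bit)      # 0.8 W
```

The sketch merely illustrates that, at fixed bandwidth, interface power scales linearly with energy per bit, so an 80% reduction in pJ/bit translates directly into a 5x lower interface power.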
Such new DRAM architectures obviously require a new generation of DRAM controllers. Our current focus is therefore on the design space exploration and optimisation of the controller. Our jointly optimised DRAM and controller architecture provides flexible access to the 3D-DRAM subsystem, enabling up to 50% energy-per-bit savings.