1. Introduction/background:
Main memory is present in computers, servers, mobile devices, and many other systems, and it is one of the most important components of a computing system. The main challenges associated with memory are its energy consumption, cost, data storage capacity, management, and performance. Memory systems are scaled to keep pace with the growth of new applications, and the demand for memory capacity keeps increasing as new technologies develop.
2. Literature review:
The memory system is one of the fundamental subsystems of a computer: it stores the data being processed by the system. The dynamic random access memory (DRAM) architecture consumes significant energy and power, so increasing memory performance brings challenges in energy and power. The memory system is shared among processing cores and other clients/agents, which increases the demand for memory capacity and bandwidth, and in turn the demand for performance and quality of service. Modern applications are increasingly memory- and data-intensive, and they require memory that serves both real-time and offline workloads, so research continues on memory systems that support efficient data analysis. Memory sharing among on-chip applications is one technique that is increasingly used to improve efficiency.
Flash memory and DRAM are the two technologies that most affect memory systems; their scaling improves capacity and cost efficiency. Phase-change memory (PCM) and STT-MRAM are found to be more scalable, offering bandwidth close to that of DRAM with lower power consumption than a hard disk. These new technologies enable unification of the storage and memory systems.
Requirements of the memory system:
Over time, users are demanding more and more memory and larger systems. With increased memory capacity, performance and bandwidth must also be considered, and cost efficiency plays an important role as well. These demands can be grouped into traditional requirements and new requirements, described below:
Traditional requirements focus on capacity, performance, and cost efficiency. With the number of applications running on today's computing systems, memory sharing has become an important technique, alongside DRAM technology scaling for density. There is demand for high bandwidth along with low latency, and applications that share memory must rely on a number of techniques to meet it.
The new requirements for memory are threefold: scalability, predictability, and bandwidth efficiency. Scalability is demanded more of alternative technologies than of DRAM. DRAM scaling has proceeded from roughly the 100 nm to the 30 nm technology node, and nowadays device and circuit techniques are pushing DRAM below 30 nm.
Predictability is the other requirement for memory. In earlier systems, shared resources offered less bandwidth and less capacity. Design effort has therefore focused on performance and on mitigating interference at the memory interface and within its technology.
Methods:
Research is required on a number of fundamental aspects of memory and computing system design: overcoming DRAM scaling problems, adopting new memory technologies, and creating new designs that deliver good performance and QoS for users. This part of the report deals with methods for overcoming these memory-system problems. Memory components bring scaling challenges, and cooperation across the different levels and layers of the computing stack will yield solutions to them; these challenges span software and algorithms as well as device architectures, memory chips, and processors. One method is to develop a new dynamic random access memory architecture, which is discussed and researched in this topic of the report.
Main memory has been implemented with DRAM technology because of its low latency and low cost, achieved by reducing the DRAM cell size. Further reduction of the cell size, however, becomes more expensive because of manufacturing cost, and the refresh rate must increase due to leakage. These scaling challenges are discussed in a recent research paper by Samsung, which identifies refresh cost, write latency, and cell retention over time as the three major challenges for effective DRAM scaling. The design of a new dynamic random access memory architecture must address the issues given below:
- · The energy and performance cost of refresh should be reduced as DRAM scales.
- · Bandwidth of DRAM should be improved.
- · DRAM's reliability should be increased at low cost.
- · Data movement between the processing elements and DRAM should be reduced.
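To illustrate the first issue above, one direction explored in the DRAM-scaling literature is retention-aware refresh: most rows retain data far longer than the worst case, so rows can be grouped by retention time and strong rows refreshed less often. The sketch below is illustrative only — the retention bins and row counts are made-up numbers, not measurements from any device.

```python
# Sketch of retention-aware refresh (hypothetical numbers).
# Idea: instead of refreshing every row at the worst-case 64 ms rate,
# refresh each group of rows at a rate matched to its retention time.

WINDOW_MS = 64  # worst-case retention window assumed by standard refresh

# (retention time in ms, number of rows in that bin) -- illustrative only
BINS = [
    (64, 1_000),        # weak rows: need the worst-case rate
    (128, 30_000),      # can be refreshed half as often
    (256, 1_000_000),   # the vast majority: refreshed 1/4 as often
]

def refreshes_per_second(bins):
    """Total row refreshes per second for the given retention bins."""
    return sum(rows * 1000 / retention_ms for retention_ms, rows in bins)

# Baseline: every row refreshed at the worst-case rate.
total_rows = sum(rows for _, rows in BINS)
baseline = refreshes_per_second([(WINDOW_MS, total_rows)])
binned = refreshes_per_second(BINS)

print(f"baseline: {baseline:,.0f} row refreshes/s")
print(f"binned:   {binned:,.0f} row refreshes/s "
      f"({binned / baseline:.1%} of baseline)")
```

Because refresh energy scales with the number of row refreshes issued, any reduction in refresh count under this scheme translates directly into the energy and performance savings the first bullet calls for.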
Reducing the impact of refresh plays a vital role in the design of a new DRAM architecture: as the capacity of dynamic random access memory increases, more cells must be refreshed. Recent research on DRAM scaling and density has observed this limitation. The graph below plots the energy spent on refresh, described as power consumption, against device capacity.
Fig 1: DRAM refresh power consumption versus device capacity.
The other graph describes the projected impact of refresh on DRAM devices: the throughput lost, plotted as time spent refreshing versus device capacity. The graph is plotted from data collected in research on device capacities.
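The trend in both graphs can be approximated from first principles. In standard DDR DRAM, every row must be refreshed within a roughly 64 ms retention window, so the controller issues one refresh command about every tREFI = 7.8 µs; each command occupies the device for the refresh cycle time tRFC, which grows with density. The ratio tRFC/tREFI is then a rough estimate of the throughput lost to refresh. The tRFC values below are representative of published DDR3/DDR4 datasheet figures and are used only as illustrative assumptions:

```python
# Rough model of DRAM refresh overhead versus device density.
# Assumption: standard auto-refresh with one refresh command every
# tREFI = 7.8 us.  tRFC values are illustrative, in the range of
# typical DDR3/DDR4 datasheets, not taken from any specific part.

T_REFI_NS = 7800  # average interval between refresh commands (ns)

# device density -> representative refresh cycle time tRFC (ns)
TRFC_NS = {
    "1 Gb": 110,
    "2 Gb": 160,
    "4 Gb": 260,
    "8 Gb": 350,
}

def refresh_throughput_loss(trfc_ns, trefi_ns=T_REFI_NS):
    """Fraction of device time unavailable because it is refreshing."""
    return trfc_ns / trefi_ns

for density, trfc in TRFC_NS.items():
    loss = refresh_throughput_loss(trfc)
    print(f"{density}: tRFC = {trfc} ns -> ~{loss:.1%} of time refreshing")
```

The model reproduces the qualitative shape of the graphs: as density grows, tRFC grows, and both the time and the energy spent refreshing claim an ever larger share of the device.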
Results and findings:
The results and findings from the research above lead to the following solution directions:
- · Interference at the shared memory interface slows applications down, so mitigating it increases performance.
- · Application slowdown can be bounded, and quality of service provided, by controlling interference at the shared interfaces.
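The second direction above can be made concrete with a simple model used in memory-QoS research: for a memory-bound application, slowdown is estimated as the ratio of its request service rate when running alone to its service rate when sharing the memory system. The workload names and rates below are hypothetical, chosen only to illustrate the calculation:

```python
# Sketch: estimating application slowdown from memory interference.
# Model assumption (common in memory-QoS work): for memory-bound
# applications, slowdown ~= alone_service_rate / shared_service_rate.
# A memory controller can measure the shared rate directly and estimate
# the alone rate by briefly prioritizing one application at a time.

def estimated_slowdown(alone_rate, shared_rate):
    """Slowdown relative to running alone on the memory system."""
    return alone_rate / shared_rate

# requests/s served when running alone vs when sharing -- made-up numbers
workloads = {
    "streaming":       (8.0e6, 5.0e6),
    "pointer-chasing": (1.2e6, 0.4e6),
}

for name, (alone, shared) in workloads.items():
    print(f"{name}: estimated slowdown "
          f"{estimated_slowdown(alone, shared):.1f}x")
```

With such estimates, a controller can detect which application is being hurt most by interference and adjust scheduling priorities to keep every application's slowdown within a target bound.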
Conclusion and future work:
In conclusion, the report presents different ideas and research that give techniques for scaling and growing main memory within the system and its architecture. The basic principles behind memory scaling are: a) device, software, and system cooperation, meaning that information from all layers of the system is exchanged to develop and scale the memory; b) designing the memory system for the common case rather than the worst case; and c) heterogeneity across memory levels, which enables a number of metrics to be optimized at one time. A number of ideas have been discussed in this paper.
This report shows an approach to a scalable memory system and the other components that make system optimization possible. A number of forms of cooperation are enabled across different levels, including the microarchitecture, software, and other devices, for the scaling of memory. The memory scaling challenges can be overcome by using heterogeneity in the design.