Central processing unit

Mobile phones have central processing units (CPUs), similar to those in computers, but optimised to operate in low-power environments. Mobile CPU performance depends not only on the clock rate (generally given in multiples of hertz)[10] but also on the memory hierarchy, which greatly affects overall performance. Because of this, the performance of mobile phone CPUs is often more appropriately given by scores derived from standardized tests that measure real effective performance in commonly used applications.

A central processing unit (CPU), also referred to as a central processor unit,[1] is the hardware within a computer system or smartphone which carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system. The term has been in use in the computer industry at least since the early 1960s.[2] The form, design, and implementation of CPUs have changed over the course of their history, but their fundamental operation remains much the same. On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single silicon chip called a microprocessor. Since the 1970s the microprocessor class of CPUs has almost completely overtaken all other CPU implementations. Modern CPUs are large-scale integrated circuits in packages typically less than four centimeters square, with hundreds of connecting pins.

Two typical components of a CPU are the arithmetic logic unit (ALU), which performs arithmetic and logical operations, and the control unit (CU), which extracts instructions from memory, decodes them, and executes them, calling on the ALU when necessary. Not all computational systems rely on a central processing unit. An array processor or vector processor has multiple parallel computing elements, with no one unit considered the "center". In the distributed computing model, problems are solved by a distributed, interconnected set of processors.

The term memory hierarchy is used in computer architecture when discussing performance issues in computer architectural design, algorithm predictions, and lower-level programming constructs involving locality of reference. A memory hierarchy in computer storage distinguishes each level in the hierarchy by response time. Since response time, complexity, and capacity are related,[1] the levels may also be distinguished by the controlling technology. The many trade-offs in designing for high performance include the structure of the memory hierarchy, i.e. the size and technology of each component. The various components can thus be viewed as forming a hierarchy of memories (m1, m2, ..., mn) in which each member mi is in a sense subordinate to the next-higher member mi−1 of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling to activate the transfer.

There are four major storage levels:[1] internal (processor registers and cache), main (the system RAM and controller cards), on-line mass storage (secondary storage), and off-line bulk storage (tertiary and off-line storage). This is a very general structuring of the memory hierarchy; many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture.
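
The fetch-decode-execute cycle carried out by the control unit, and the way it calls on the ALU, can be illustrated with a minimal sketch. The one-byte instruction encoding and the three opcodes below are invented purely for illustration and do not correspond to any real processor.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 3-instruction machine: each instruction is one byte of
     * opcode followed by one byte of operand.  Opcodes are invented for
     * illustration only. */
    enum { OP_LOAD = 0, OP_ADD = 1, OP_HALT = 2 };

    int main(void) {
        /* "Memory" holding a tiny program: load 5, add 7, halt. */
        uint8_t memory[] = { OP_LOAD, 5, OP_ADD, 7, OP_HALT, 0 };

        uint8_t acc = 0;   /* accumulator register */
        size_t  pc  = 0;   /* program counter */

        for (;;) {
            /* Fetch: the control unit reads the next instruction from memory. */
            uint8_t opcode  = memory[pc];
            uint8_t operand = memory[pc + 1];
            pc += 2;

            /* Decode and execute: arithmetic is delegated to the ALU
             * (modelled here simply as the + operator). */
            if (opcode == OP_LOAD) {
                acc = operand;
            } else if (opcode == OP_ADD) {
                acc = acc + operand;        /* ALU operation */
            } else if (opcode == OP_HALT) {
                break;
            }
        }

        printf("accumulator = %u\n", acc);  /* prints 12 */
        return 0;
    }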
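
The difference in response time between levels of the memory hierarchy can be observed with a simple timing experiment: walking the same array once contiguously (cache-friendly, good spatial locality) and once with a large stride (cache-hostile). The array size and stride below are arbitrary choices, and the measured ratio will vary from machine to machine; this is a sketch of the idea rather than a calibrated benchmark.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M ints, larger than typical last-level caches */

    static double walk(const int *a, size_t stride) {
        clock_t start = clock();
        volatile long sum = 0;
        /* Touch every element exactly once, either contiguously (stride 1)
         * or in passes separated by a large stride that defeats spatial
         * locality in the caches. */
        for (size_t s = 0; s < stride; s++)
            for (size_t i = s; i < N; i += stride)
                sum += a[i];
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void) {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = (int)i;

        printf("sequential walk: %.3f s\n", walk(a, 1));
        printf("strided walk:    %.3f s\n", walk(a, 4096));
        free(a);
        return 0;
    }

Both walks perform the same number of additions; any difference in the reported times comes from how well each access pattern is served by the faster levels of the hierarchy.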
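
The remark that a paging algorithm can itself be treated as a level of the hierarchy can likewise be sketched. The fragment below simulates a FIFO page-replacement policy over a page-reference string and counts page faults; the frame count and reference string are arbitrary examples, not taken from any particular system.

    #include <stdio.h>
    #include <string.h>

    #define FRAMES 3   /* number of physical page frames (arbitrary) */

    int main(void) {
        /* Example page-reference string (arbitrary). */
        int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
        int n = sizeof refs / sizeof refs[0];

        int frames[FRAMES];
        int next = 0;        /* index of the oldest frame (FIFO victim) */
        int faults = 0;

        memset(frames, -1, sizeof frames);   /* -1 marks an empty frame */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < FRAMES; f++)
                if (frames[f] == refs[i]) { hit = 1; break; }

            if (!hit) {
                /* Page fault: evict the oldest resident page (FIFO). */
                frames[next] = refs[i];
                next = (next + 1) % FRAMES;
                faults++;
            }
        }

        printf("page faults: %d of %d references\n", faults, n);
        return 0;
    }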