The computer architecture community is at an interesting crossroads. Moore's Law is slowing down, stressing traditional assumptions that computing gets cheaper and faster over time—assumptions that underpin a significant fraction of the economic growth of the past few decades. At the same time, demand for computing continues to grow at phenomenal rates, driven by deeper analysis over growing volumes of data, new and diverse workloads in the cloud, smarter edge devices, and new security constraints. Is the situation dire, or is this the beginning of a new phase in the evolution of system architecture?
Two recent trends provide hope that it is the latter! The first trend, at the microarchitecture level, is specialization, or domain-specific hardware/software codesign. Compared with a general-purpose processor, a specialized architecture such as an ASIC (application-specific integrated circuit) customizes the design for a specific application or workload class. A good example is Google's TPU series of ASICs. Such specialization yields significant area and power efficiencies. The trade-off, of course, is that we lose the volume advantages of a general-purpose system, both in software ecosystem support (and ease of development) and in amortizing the costs of building a custom chip (notably, the non-recurring engineering costs, or NRE). The second trend, at the system level, is warehouse-scale computing, or more broadly cloud computing—a computing model that treats the entire "datacenter as a computer." This model amortizes costs across larger ensembles, but it also provides additional benefits: ubiquitous access, simpler system management, and better encapsulation of hardware under higher-level software interfaces and abstractions. Initially popularized by large Internet services such as search, email, and social networks, cloud computing is now increasingly being adopted by traditional enterprises as well.