If you’ve been around the computer industry very long, you’ve seen several ways to determine the performance of a system. From the early days, when raw processor clock speed told the story, through the move to multiple processors, to today’s focus on the number of processing cores per chip, the way we look at computer performance has changed as processing architectures and the software that uses them have changed. Today, if you want a broad understanding of system performance, look to the cores.
Why has the number of processing cores become so important?
Part of the reason is that it has become possible to place multiple processing cores in a single-chip package. Basically, you can shove more processors into a very limited bit of real estate. The cores in a single package also tend to be tightly coupled, sharing cache and reaching memory over a single bus. That’s important because of the way software has evolved to take advantage of multiple cores.
Once upon a time, operating system and compiler architects divided work along coarse lines: in a dual-processor system, the operating system might execute on one processor while applications executed on the other, for example. Today, both applications and operating systems split work along much finer-grained lines, each breaking itself into many tasks that can run simultaneously on however many cores are available. What this means is that for applications that are “compute bound” (performance limited by the speed of the processor rather than by storage access or network bandwidth), multiple cores can make a huge difference in the execution speed of a program.
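To make that concrete, here is a minimal sketch, not taken from any particular product, of how a compute-bound loop can be split across all available cores with standard C++ threads. The sum-of-squares workload and the chunking scheme are illustrative stand-ins for real numeric work:

    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // Ask the runtime how many hardware threads (cores) are available.
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 1;  // the call may return 0 if the count is unknown

        const std::size_t n = 100'000'000;
        std::vector<double> partial(cores, 0.0);
        std::vector<std::thread> workers;

        // Divide the iteration space into one contiguous chunk per core.
        for (unsigned t = 0; t < cores; ++t) {
            workers.emplace_back([&, t] {
                std::size_t begin = n * t / cores;
                std::size_t end   = n * (t + 1) / cores;
                double sum = 0.0;
                for (std::size_t i = begin; i < end; ++i)
                    sum += static_cast<double>(i) * i;  // stand-in for real numeric work
                partial[t] = sum;  // each thread writes its own slot, so no locking is needed
            });
        }
        for (auto& w : workers) w.join();

        // Reassemble the per-core results.
        double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::cout << "cores: " << cores << "  result: " << total << '\n';
    }

Because each chunk is independent and the loop does nothing but arithmetic, the speedup on a workload like this tends to track the core count closely.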
GPU architecture
The ultimate example of this is in modern supercomputers, many of which are based on GPU architectures. GPUs (graphics processing units) can have thousands of cores, where the most commonly available general-purpose CPU chips top out at around eight. Programmers for applications running on these GPU-based supercomputers use parallel programming frameworks, rather than the operating system’s job scheduler, to break an application into thousands of pieces that run simultaneously, with the results reassembled before presentation.
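For a feel of that programming style, standard C++ parallel algorithms express this kind of element-wise decomposition directly, and some compilers (NVIDIA’s nvc++ with its -stdpar option, for instance) can offload the same source to a GPU’s thousands of cores. This SAXPY-style sketch is illustrative rather than a real supercomputing code:

    #include <algorithm>
    #include <execution>
    #include <iostream>
    #include <vector>

    int main() {
        const std::size_t n = 10'000'000;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        const float a = 3.0f;

        // Each element is an independent work item, so the runtime is free to
        // spread the loop across however many cores (CPU or GPU) it can find.
        std::transform(std::execution::par_unseq,
                       x.begin(), x.end(), y.begin(), y.begin(),
                       [a](float xi, float yi) { return a * xi + yi; });

        std::cout << "y[0] = " << y[0] << '\n';  // 3*1 + 2 = 5
    }

The key property is that no element’s result depends on any other’s, which is exactly what lets thousands of cores work at once.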
So why doesn’t every engineering workstation have GPUs at its core?
Some do, but the sad fact is that not every problem lends itself to being divided up in massively parallel fashion. Modern general-purpose engineering workstations walk a fine line among the performance profiles of a number of different problem sets. Multiple multi-core processors provide numeric application performance that’s much better than a system with fewer cores can offer, while still allowing general-purpose operating systems like Windows or a common Linux distribution to be used.
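One way to see why is Amdahl’s law, which bounds the speedup of a program on n cores when only a fraction p of its work can be parallelized:

    speedup(n) = 1 / ((1 - p) + p / n)

If p = 0.9, the speedup can never exceed 10x no matter how many thousands of cores you add, because the serial 10 percent of the work dominates. For workloads like that, a workstation with a moderate number of fast cores often serves better than a massively parallel accelerator would.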
Look to the cores when you’re specifying equipment for an engineering workstation. You’ll still need adequate memory and fast storage, but the maximum number of processing cores is the shortest path to the best performance in systems that will be useful for the full range of tasks faced by the modern engineer.