Multi-Core Processor

[Image: A basic explanation of how a Multi-Core Processor works]

While shrinking an existing CPU's component size or cooling it better allows you to increase its Clock Speed, physics dictates that computer chips can only get so small unless you get exotic. Adding more stages to the instruction pipeline lets the processor do more in less time, but excessive pipeline length makes nonlinear code execute slowly. So CPU makers asked themselves: how can we continue to make processors faster now that all of our old tricks are running into a brick wall? Simple: pack more than one of these modern processor cores into each chip.

The multi-core processor tries to integrate as many CPU cores and their supporting components as possible into a given package size. Even though each CPU core can still only do one thing at a time (until we come up with something else), the benefits are twofold:

  • The computer can handle as many processes as there are cores.
  • A process can take all the cores for maximum throughput.

However, there are a few caveats:

  • Performance only really improves with programs that have a lot of independent operations. Video editing, for example, qualifies: the frames to be processed already exist, so each core just grabs the next unprocessed frame as soon as it finishes the one it was working on. Many programs people use are I/O bound, meaning they spend their time waiting on some input or output device (like the hard drive, the network, or commands from the user). Video games may or may not benefit, depending on what's going on. Overall, if the CPU is being heavily used, adding another core may help.
  • The cores may fight for resources, especially data operations. Even though there are more cores, there's still only one memory controller they have to share, among other things.
  • The operating system or program has to be aware of the cores and coordinate them. You do not want a CPU core to work on something another core is already working on, and you don't want them to stomp on each other's memory spaces.
    • This was especially a problem with the Sega Saturn, and part of the reason why developers hated it. Granted, Sega's Virtua Fighter Remix did provide a good example of how to use the system, but multi-processor programming was a new thing at the time, and many programmers stuck with the tried-and-true approach of programming for one processor.
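The "independent operations" caveat above can be sketched with Python's standard library. This is a minimal illustration, not real video software: frame_filter and the list of "frames" are hypothetical stand-ins for per-frame processing work.

```python
# Each "frame" is independent, so a pool of worker processes (one per
# core, by default) can each grab the next unprocessed frame as soon as
# it finishes its current one.
from concurrent.futures import ProcessPoolExecutor

def frame_filter(frame):
    # Stand-in for per-frame work: here a "frame" is just a number and
    # "processing" it is an independent arithmetic operation.
    return frame * 2 + 1

if __name__ == "__main__":
    frames = list(range(8))
    with ProcessPoolExecutor() as pool:
        processed = list(pool.map(frame_filter, frames))
    print(processed)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

Because no frame depends on another, the result is the same regardless of which core processed which frame; that independence is exactly what makes the workload scale with core count.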

A common misconception is that a multi-core processor is the same as a single core running at a multiplied speed. In fact, a single core of multiplied speed will always handle instructions faster than a multi-core processor can, especially with processes that can't run on more than one core. For example, if Processes A, B, and C take 1, 2, and 3 seconds respectively, a quad-core processor gets all three done in 3 seconds, since each process goes to an available core and the slowest one sets the pace. A single-core processor at quadruple speed runs them back to back in (1 + 2 + 3)/4 = 1.5 seconds.
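The arithmetic in that example can be worked through directly:

```python
# Run times (in seconds) for Processes A, B, and C on a baseline core.
run_times = [1, 2, 3]

# On a quad-core chip each process gets its own core, so the job is done
# when the longest-running process finishes.
quad_core_time = max(run_times)

# A single core at 4x speed runs the processes sequentially, each in a
# quarter of its baseline time.
fast_single_core_time = sum(run_times) / 4

print(quad_core_time)         # 3
print(fast_single_core_time)  # 1.5
```

Note that the comparison flips as soon as there are enough independent processes to keep all four cores busy; the single fast core's advantage comes from never idling.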

As an aside, there's a technique some processor manufacturers use called simultaneous multithreading (SMT), branded as Hyper-Threading by Intel (AMD simply calls theirs SMT). It allows instructions from more than one thread to be issued to a core's available execution units, since each core has several execution stages and some may stall while waiting for slower memory or I/O to catch up. The OS normally sees each hardware thread as another core or processor, but with an extra hint so the scheduler doesn't pile work onto half of the cores while leaving the other half idle.

In the PC world, it used to be that all cores in a CPU were equal, with perhaps some cache locality where cores in a group have faster access to a common cache, so the OS scheduler only has to assign related threads to the same group. Today heterogeneous cores are becoming more common, like their cousins in mobile processors, where each core group is optimized for a certain characteristic (speed, power usage, latency, etc.), making it far more complex to assign threads efficiently.

The limitations of parallelizing code are spelled out by Amdahl's Law on Wikipedia.
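Amdahl's Law can be stated as a one-line formula: if a fraction p of a program can be parallelized across n cores, the overall speedup is 1 / ((1 - p) + p/n), capped by the serial remainder no matter how many cores you add. A small sketch:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work runs on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the code parallelized, 1024 cores give less than a
# 20x speedup, because the serial 5% dominates the run time.
print(round(amdahl_speedup(0.95, 4), 2))     # 3.48
print(round(amdahl_speedup(0.95, 1024), 2))  # 19.64
```

This is why "throw more cores at it" stops paying off: the serial portion puts a hard ceiling of 1 / (1 - p) on the speedup, here 20x, even with infinite cores.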

Parallel Computing Issues

One of the main problems with parallel computing, which here means multiple programs running on the same computer system, is that programs are not only sharing time on the CPU or CPUs, but also sharing space. Time sharing is fairly easy to overcome, because the CPU can only do one thing at a time. Sharing space is harder, because programs can occupy or want to occupy the same space. "Space" in this context means any resource the computer has, such as RAM, a storage drive, a display, etc. And because these resources are limited in some fashion, you'll run into problems when multiple programs want a contested resource.

You can imagine this issue being similar to factory workers. Ideally every worker would have their own set of tools, but this can be wasteful because some tools aren't used by some workers (why does someone who's tightening bolts need a paint sprayer or the painter need a wrench?) and some are used infrequently. Giving workers only the tools they need and need frequently while making them share others helps maximize efficiency, but it can cause issues when multiple people need a limited resource.

There are several classes of problems that arise from parallel computing:

  • Deadlock
    • Program A has a resource, say a scanner, but it won't give it up until it acquires another resource, such as a printer. But Program B has the printer and it's not going to give it up until it acquires the scanner.
    • Real-life example: Two nations doing a prisoner exchange meet at a bridge, but the nations won't give up their prisoner until the other one does first.
  • Livelock
    • Multiple programs want access to a resource, but they keep deferring to each other, so nobody gets the resource and the programs can't really run. A classic example was networked computers sharing a channel: each computer sees the channel is available, but politely lets another computer have a turn, so nobody ever transmits.
    • Real-life example: Two people are walking down the hallway, but their paths run into each other, then play the shuffle game as they try to move past each other.
  • Starvation
    • A lower priority program that is waiting for a resource to become available is constantly pre-empted by higher priority programs for that resource. The lower priority program may not get to run for a long time. The typical fix for this is to gradually, but temporarily, bump up the priority of the program until it gets a turn.
    • Real-life example: You're in the lowerclassmen group in college and you're trying to get into a class that's required, but all the upperclassmen and other "priority registration" groups keep taking the available seats.
  • Priority Inversion
    • A higher priority program that's waiting for a resource is effectively blocked by a lower priority program, because the lower priority program holds the resource and hasn't given it up yet.
    • Real-life example: You need to use the computer to write up a school report due tomorrow, but your sibling is using it to write up a report due next week.
  • Race Condition
    • Multiple programs write to a resource and overwrite each other's results, potentially producing incorrect output. Another form is when one program depends on another in some way but there's no synchronization mechanism; sometimes the program works, because the dependent task happens to complete before its results are used, and sometimes it fails.
    • Real-life example: Two people are editing a TV Tropes page. When they both save, the one who saved last has their content posted instead (this is actually fixed, with the first person who edits the page getting timed exclusive access).
  • Non-Atomic Operations
    • An atomic operation is something that can be done without being interrupted. This is either something so basic that it can't be interrupted, like adding two numbers together, or interrupts are explicitly disabled. Non-atomic operations are interruptible, which can cause a program that was working on a resource to be interrupted, only for another program or thread to modify that resource before handing control back to the original program.
    • Real-life example: A captain of an airplane is reading off a navigation chart but has to use the restroom. The co-pilot takes over, receives new information, and changes the navigation chart. When the captain comes back, they need to be aware of the change, otherwise they may assume the old information still applies.
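The race-condition and non-atomic-operation entries above can be sketched with Python's threading module. Note that `counter += 1` looks like one step but is really a read-modify-write sequence, so two threads can interleave and lose updates; taking a lock around the update makes it effectively atomic. The names below (safe_add, the state dictionary) are illustrative, not from any particular program.

```python
# Four threads increment a shared counter; a Lock serializes the
# read-modify-write so no update is lost.
import threading

def safe_add(state, lock, n):
    for _ in range(n):
        with lock:  # only one thread may run the update at a time
            state["counter"] += 1  # load, add, store: not atomic on its own

if __name__ == "__main__":
    lock = threading.Lock()
    state = {"counter": 0}
    threads = [threading.Thread(target=safe_add, args=(state, lock, 100_000))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # With the lock held around every increment, the total is always
    # exactly 4 * 100_000.
    print(state["counter"])  # 400000
```

Remove the `with lock:` line and the total can come up short, and by a different amount on each run, which is what makes race conditions so painful to debug.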
