What is a user-friendly way to explain how a GPU and a CPU communicate, and how a GPU works?

Thread Starter

spikespiegelbebop

Joined Nov 30, 2021
146
I'm writing an article, and I need to explain the following in a very simple way, so that most people can understand it:

Let's analyze how a GPU works. The video memory takes graphic information from the storage unit, saves it, and sends it to the graphics processor for processing. The "path" between the video memory and the graphics processor is defined by the memory bandwidth. If we picture this as a straight road, the wider the road, the more cars can pass through it at once, and the faster and freer the traffic flows.
Sorry, this is translated from Portuguese, I'm not sure if it's 100% accurate.
Basically what I need is a way to explain how a GPU and a CPU communicate, and also how a GPU works, but in language for dummies.
 

Papabravo

Joined Feb 24, 2006
21,159
The GPU and the CPU are independent autonomous machines executing preprogrammed sequences of instructions. Let us assume the existence of a 3rd machine whose only purpose is to move information from the video memory in the storage unit to the graphics processor memory. Let us further assume that the information can be transferred without any additional modification. Let's call it the MMT (Memory to Memory Transfer) machine.

This machine can operate in multiple ways:
  1. It can send the data over a pathway that is 1-bit wide at some rate measured in bits/second. To compute the time it takes to send a complete frame, you divide the number of bits in the image by the bit rate (a rough sketch of this arithmetic follows the list).
  2. We can make the pathway one byte (8 bits) wide to move 8 times more bits in the same amount of time, at the expense of more hardware to implement 8 one-bit channels instead of a single one-bit channel.
  3. We can extend the pathway to a 64-bit word to move 64 times more bits in the same amount of time, at the expense of more hardware to implement 64 one-bit channels instead of a single one-bit channel.
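A rough sketch of that arithmetic in Python (the frame size and transfer rate below are made-up numbers for illustration, not taken from any particular machine):

```python
# Time to move one complete frame over pathways of different widths.
# Assumed example numbers: a 640 x 480 frame at 24 bits per pixel,
# with the pathway completing 100 million transfers per second.
FRAME_BITS = 640 * 480 * 24        # bits in one complete frame
TRANSFERS_PER_SECOND = 100e6       # transfer cycles per second

for width in (1, 8, 64):           # 1-bit, byte-wide, and 64-bit pathways
    bits_per_second = width * TRANSFERS_PER_SECOND
    seconds = FRAME_BITS / bits_per_second     # time = bits / bit rate
    print(f"{width:2d}-bit pathway: {seconds * 1e3:6.2f} ms per frame")
```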
Does that do it for you?
 

Thread Starter

spikespiegelbebop

Joined Nov 30, 2021
146
Papabravo said:
The GPU and the CPU are independent autonomous machines executing preprogrammed sequences of instructions. Let us assume the existence of a 3rd machine whose only purpose is to move information from the video memory in the storage unit to the graphics processor memory. Let us further assume that the information can be transferred without any additional modification. Let's call it the MMT (Memory to Memory Transfer) machine.

This machine can operate in multiple ways:
  1. It can send the data over a pathway that is 1-bit wide at some rate measured in bits/second. To compute the time it takes to send a complete frame, you divide the number of bits in the image by the bit rate.
  2. We can make the pathway one byte (8 bits) wide to move 8 times more bits in the same amount of time, at the expense of more hardware to implement 8 one-bit channels instead of a single one-bit channel.
  3. We can extend the pathway to a 64-bit word to move 64 times more bits in the same amount of time, at the expense of more hardware to implement 64 one-bit channels instead of a single one-bit channel.
What do you mean by storage unit? RAM or the actual storage unit (flash drive/HDD/Blu-ray)?

Does that do it for you?
I believe it's ok, but it lacks an analogy. Could you use the PlayStation 2 Tech Demo as an analogy?
I'm actually writing an article about the PS2.

This is the tech demo:
 

Papabravo

Joined Feb 24, 2006
21,159
spikespiegelbebop said:
What do you mean by storage unit? RAM or the actual storage unit (flash drive/HDD/Blu-ray)?

I believe it's ok, but it lacks an analogy. Could you use the PlayStation 2 Tech Demo as an analogy?
I'm actually writing an article about the PS2.

This is the tech demo:
I used the terminology in your original post. I don't know squat about the PlayStation 2.
The CPU and the GPU can have incompatible memory formats. The job of the MMT is to take the contents of the CPU memory, probably RAM, and move it to the GPU memory, also probably RAM. The central question is the width of the pathway, which can be as small as one bit in a machine with extreme budget constraints, or as large as n bits, where n is finite and probably in the range [1, 64], although slightly larger values would not surprise me in something like a DCS (Digital Combat Simulator), especially if cost were no object.
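A minimal sketch of the MMT idea in Python (made-up sizes; real hardware does this with dedicated copy logic, not a software loop):

```python
# A toy "MMT": move a frame from CPU memory to GPU memory,
# one pathway-width chunk per transfer cycle.
cpu_memory = bytes(range(256)) * 10   # pretend frame data in CPU RAM (2560 bytes)
gpu_memory = bytearray(len(cpu_memory))

PATH_WIDTH_BYTES = 8                  # a 64-bit-wide pathway moves 8 bytes per cycle

cycles = 0
for offset in range(0, len(cpu_memory), PATH_WIDTH_BYTES):
    # One transfer cycle: copy the next chunk across the pathway.
    gpu_memory[offset:offset + PATH_WIDTH_BYTES] = \
        cpu_memory[offset:offset + PATH_WIDTH_BYTES]
    cycles += 1

print(f"moved {len(cpu_memory)} bytes in {cycles} cycles")  # wider path -> fewer cycles
```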
 

MrChips

Joined Oct 2, 2009
30,708
The CPU is a general-purpose data processor. Its function is to take multiple commands and data from various processes and peripherals and produce the desired results.

Producing real-time graphical information on a display device is an example of one such function. This is a very intensive and mostly repetitive operation, and it consumes a large proportion of the CPU's available time and resources, memory in particular. The purpose of the GPU is to offload as much of the graphics work as possible onto an auxiliary device, reducing the workload placed on the CPU.

A graphics image requires a lot of memory, which need not be accessible to the CPU.
For example, if the CPU needs the entire screen to be blue, the CPU does not need to address every pixel. The CPU can send a command to the GPU, “make the screen blue” and let the GPU do its magic.
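A toy illustration of that offloading idea in Python (the ToyGPU class and its "fill" command are hypothetical, nothing like a real driver API):

```python
# Without a GPU: the CPU must touch every pixel itself.
WIDTH, HEIGHT = 640, 480

def cpu_fill_blue(framebuffer):
    # One write per pixel: 640 * 480 = 307,200 operations on the CPU.
    for row in framebuffer:
        for x in range(len(row)):
            row[x] = (0, 0, 255)

# With a GPU: the CPU sends one high-level command and moves on.
class ToyGPU:
    """Hypothetical GPU that owns its own framebuffer memory."""
    def __init__(self, width, height):
        self.framebuffer = [[(0, 0, 0)] * width for _ in range(height)]

    def execute(self, command, *args):
        if command == "fill":                  # e.g. "make the screen blue"
            color = args[0]
            for row in self.framebuffer:       # the per-pixel work happens here,
                for x in range(len(row)):      # on the GPU, not the CPU
                    row[x] = color

cpu_framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
cpu_fill_blue(cpu_framebuffer)                 # CPU does 307,200 pixel writes

gpu = ToyGPU(WIDTH, HEIGHT)
gpu.execute("fill", (0, 0, 255))               # CPU issues a single command
```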
 

Papabravo

Joined Feb 24, 2006
21,159
MrChips said:
The CPU is a general-purpose data processor. Its function is to take multiple commands and data from various processes and peripherals and produce the desired results.

Producing real-time graphical information on a display device is an example of one such function. This is a very intensive and mostly repetitive operation, and it consumes a large proportion of the CPU's available time and resources, memory in particular. The purpose of the GPU is to offload as much of the graphics work as possible onto an auxiliary device, reducing the workload placed on the CPU.

A graphics image requires a lot of memory, which need not be accessible to the CPU.
For example, if the CPU needs the entire screen to be blue, the CPU does not need to address every pixel. The CPU can send a command to the GPU, "make the screen blue", and let the GPU do its magic.
This is correct and illustrates another complication with any simplified explanation. Displays come in many flavors. Two common types are:
  1. A raster display, where, like a traditional television, the image is drawn one line at a time, left to right and top to bottom. In some implementations the display is interlaced, alternating between odd-numbered and even-numbered lines.
  2. A vector display, where the GPU holds a list of objects that it paints on the screen one at a time, over and over. This type of display is amenable to processing higher-level commands.
It has been a while since I worked with displays so there may be additional possibilities.
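As a rough sketch of the difference between the two (toy Python, not modeled on any real display controller):

```python
WIDTH, HEIGHT = 16, 8   # tiny made-up resolution for illustration

def raster_scan_order(interlaced=False):
    """Yield (line, column) pairs in the order a raster display draws them."""
    if interlaced:
        # Odd-numbered lines first, then even-numbered lines, as on a TV.
        lines = list(range(1, HEIGHT, 2)) + list(range(0, HEIGHT, 2))
    else:
        lines = range(HEIGHT)
    for y in lines:
        for x in range(WIDTH):       # left to right within each line
            yield (y, x)

# Vector display: no pixel grid; repeatedly repaint a list of objects.
display_list = ["line (0,0)->(10,5)", "circle at (8,4)", "text 'HELLO'"]

def vector_refresh(objects):
    for obj in objects:              # one object at a time, every refresh
        print("paint", obj)

print(list(raster_scan_order(interlaced=True))[:3])  # first positions drawn
vector_refresh(display_list)
```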
 