Saturation is like the opposite of converting an image into greyscale.
Every pixel is stored as three values: red, green, and blue. These are simple numerical values between 0 and 255.
Red is (255,0,0) and green is (0,255,0).
When all three values are the same, the pixel is greyscale. (0,0,0) is black and (255,255,255) is white.
Converting an image to greyscale can be as simple as averaging the three values together. Pure red (255,0,0) becomes (85,85,85).
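As a minimal sketch of that averaging step (a single-pixel helper for illustration, not a full image pipeline):

```python
def to_greyscale(rgb):
    """Naive greyscale: replace every channel with the three-channel average."""
    avg = round(sum(rgb) / 3)
    return (avg, avg, avg)

print(to_greyscale((255, 0, 0)))  # pure red -> (85, 85, 85)
```

Real converters usually weight the channels by perceived brightness instead of averaging them equally, but the idea is the same.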
If greyscale means the three values (R, G, and B) converging toward the same value, then saturation is when these values are pushed further apart from one another. Increasing the saturation increases the difference between these values without changing the overall tonal brightness of the pixel.
That’s the easy way to think about it. It’s also sort of wrong.
Except… changing the saturation of an image isn’t quite as simple as pushing the R, G, and B values further from each other. Doing that would eventually leave us with only pure red, green, and blue pixels, and it would distort how the colors are perceived along the way: a purple might drift toward blue as it saturates. Computers need to keep the ratios between the values proportionate.
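To see the distortion, here is a quick sketch of the naive approach: pushing each channel away from the pixel's mean. (The pixel values and the scale factor are just illustrative.) Once a channel clips at 0 or 255, the ratios break and the hue shifts.

```python
import colorsys

def naive_saturate(rgb, factor):
    """Push each channel away from the pixel's mean, clamping to 0-255."""
    mean = sum(rgb) / 3
    return tuple(min(255, max(0, mean + factor * (c - mean))) for c in rgb)

before = (200, 50, 100)            # a pinkish red
after = naive_saturate(before, 3)  # two channels clip at 255 and 0

h1, _, _ = colorsys.rgb_to_hls(*(c / 255 for c in before))
h2, _, _ = colorsys.rgb_to_hls(*(c / 255 for c in after))
print(h1, h2)                      # the hue is no longer the same
```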
One way to do this in real life is as follows:
First, the color is converted into a different color space. A color space is a three-dimensional map of colors. RGB space is a 3D space with the red, green, and blue values each on an axis (think XYZ).
We convert to a different color space such as HSL. HSL uses Hue, Saturation, and Lightness as its axes, rather than quantities of red, green, and blue.
With the colors organized by hue, saturation, and lightness, one can simply push the colors along the saturation axis, then convert back to RGB space. Conversions between some color spaces really are matrix transformations, if you remember your linear algebra from college; the RGB-to-HSL conversion is a short piecewise formula instead. If not, don’t worry about any of this.
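Python’s standard library happens to ship these conversions in `colorsys` (which uses the letter order HLS rather than HSL), so a single-pixel version of the whole round trip might look like this:

```python
import colorsys

def boost_saturation(rgb, factor):
    """Scale a pixel's saturation by `factor` via the HLS color space."""
    r, g, b = (c / 255 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)  # note: colorsys order is H, L, S
    s = min(1.0, s * factor)                # clamp saturation to its valid range
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))

print(boost_saturation((140, 110, 160), 1.5))  # a muted purple, more saturated
```

Because only the S axis moves, the hue and lightness of the pixel come back (almost) unchanged, which is exactly the property the naive channel-pushing approach lacks.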