The basic concept behind compression

Data systems, as we know them, are built to work in "bits" of memory, each represented as a 0 or a 1. Whatever you see on a screen is ultimately stored as 0s and 1s. Since we use only two symbols to represent everything, memory sizes also come in powers of two. Signals (on and off) come in twos, and genders... okay, never mind.
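To see this for yourself, here is a minimal Python sketch (the character chosen and the formatting are just for illustration):

```python
# Every character is ultimately stored as bits (0s and 1s).
# Here we peek at the 8-bit binary form of a single character.
text = "A"
for ch in text:
    bits = format(ord(ch), "08b")   # ord() gives the character's numeric code
    print(ch, "->", bits)           # prints: A -> 01000001
```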

Every compression algorithm revolves around one basic idea: it is cheaper to represent data in a short, encoded form than in its full, original form.

For example, consider two friends, Alan and Christopher. Alan wants to send a letter to Christopher. If the two of them agree on a "code" beforehand, say that the letter "A" stands for "1", then:

Alan -------------------------------------------------------> Christopher {1}

The data sent from Alan to Christopher would be a single bit, "1". Since this data is based on a pre-agreed code, it is said to be "encoded". Christopher receives "1" and quickly remembers that "1" is nothing but "A".

Without that code, the data would normally have been sent like this:

Alan -------------------------------------------------------> Christopher {01000001}

01000001 is the 8-bit binary (ASCII) representation of "A".
As you can see, the normal data is 8 times larger than the encoded data.
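To make the idea concrete, here is a minimal Python sketch of that kind of pre-agreed code. The codebook and bit strings are just the Alan/Christopher example, not a real compression format:

```python
# The "code" that Alan and Christopher agreed on beforehand.
CODEBOOK = {"A": "1"}                          # letter -> short code
REVERSE = {v: k for k, v in CODEBOOK.items()}  # short code -> letter

def encode(letter: str) -> str:
    """Alan's side: send the short code instead of the full 8 bits."""
    return CODEBOOK[letter]

def decode(code: str) -> str:
    """Christopher's side: look up the code and recover the letter."""
    return REVERSE[code]

plain = format(ord("A"), "08b")   # "01000001" - what would normally be sent
short = encode("A")               # "1"        - what is actually sent

print(plain, "vs", short)         # 8 bits vs 1 bit
print(decode(short))              # "A"
```

The saving comes entirely from the shared codebook: both sides must know it in advance, otherwise the short form is meaningless.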

This example is deliberately minimal, but the underlying idea stays the same no matter how complex the compression algorithm gets.

So, the next time someone says there's encoding/decoding happening, you know why it's happening and how.
