Plain text files can typically be compressed a lot.  Using a format such as gzip, which decompresses very quickly, it is often faster to read the compressed file than its uncompressed version.  Let us take https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz as an example.  The gzip file weighs 2.7 GB:
<code>$ ls -lh HIGGS.csv.gz
-rw-rw-r-- 1 banana banana 2.7G Mar  28 15:34 HIGGS.csv.gz</code>
On my system (i5-7400 processor, up to 3.5 GHz; the file is on an HDD, a Hitachi Travelstar 7K1000), it takes less than 53 seconds to read that compressed file and count the 11,000,000 lines containing at least one 0 (just to do something more interesting than 'zcat HIGGS.csv.gz > /dev/null'):
<code>$ time zgrep -c 0 HIGGS.csv.gz
11000000
52.51user 3.14system 0:52.70elapsed 105%CPU (0avgtext+0avgdata 2260maxresident)k
0inputs+0outputs (0major+401minor)pagefaults 0swaps</code>
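Under the hood, zgrep is just a convenience wrapper that decompresses to a pipe and runs grep on the result.  A minimal sketch of the equivalence, using a tiny stand-in file (sample.csv is hypothetical; the measurement above uses HIGGS.csv.gz):

```shell
# Create a tiny stand-in CSV and compress it with gzip.
printf '1.0,0.5\n2.0,3.0\n0.0,4.2\n' > sample.csv
gzip -c sample.csv > sample.csv.gz

# zgrep -c on the compressed file...
zgrep -c 0 sample.csv.gz               # prints 3 (every line contains a 0)
# ...is equivalent to decompressing to stdout and piping into grep -c:
gzip -cd sample.csv.gz | grep -c 0     # prints 3 as well
```

Because the decompressor and grep are two processes connected by a pipe, they run in parallel, which is how the zgrep run above exceeds 100% CPU.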
The uncompressed version weighs 7.5 GB (2.8 times more):
<code>$ gunzip HIGGS.csv.gz
$ ls -lh HIGGS.csv
-rw-rw-r-- 1 banana banana 7.5G Mar 28 15:34 HIGGS.csv</code>
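A side note: plain gunzip deletes the .gz file after decompressing.  If you want to keep both versions, the -k flag (available in gzip 1.6 and later) does that; a small sketch on a hypothetical sample file:

```shell
# gunzip normally deletes the .gz after decompressing; -k keeps it
# (supported by gzip >= 1.6).  sample.csv is a stand-in file.
printf 'a,0\nb,0\n' > sample.csv
gzip -c sample.csv > sample.csv.gz   # compress without touching sample.csv
rm sample.csv
gunzip -k sample.csv.gz              # decompress, keeping sample.csv.gz
ls sample.csv sample.csv.gz          # both versions now exist
```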
It takes more than 59 seconds (about 12% more) to read that uncompressed file and count the lines containing at least one 0:
<code>$ time grep -c 0 HIGGS.csv
11000000
3.08user 23.96system 0:59.11elapsed 45%CPU (0avgtext+0avgdata 2308maxresident)k
15109744inputs+0outputs (1major+103minor)pagefaults 0swaps</code>
Reading the uncompressed file is, however, obviously less CPU-demanding (45% CPU versus 105%, since there is no decompression to do); its elapsed time is dominated by disk I/O.

If the sole objective is to save disk space by compressing text files you do not plan to read anytime soon, prefer XZ: it achieves significantly higher compression ratios.  In our example, the XZ file weighs less than half as much as the gzip file:
<code>$ xz HIGGS.csv
$ ls -lh HIGGS.csv.xz
-rw-rw-r-- 1 banana banana 1.3G Mar 28 15:34 HIGGS.csv.xz</code>
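The speed/ratio trade-off of xz is tunable: presets -0 through -9 (the default is -6) and, since xz 5.2, -T0 to spread compression over all CPU cores.  A sketch on a small generated stand-in file (timings and sizes will vary; on HIGGS.csv the differences are dramatic):

```shell
# Generate a small, repetitive stand-in file.
seq 100000 | sed 's/$/,0.5,0/' > sample.csv

xz -1 -c sample.csv > sample-fast.xz   # low preset: fast, larger output
xz -9 -c sample.csv > sample-best.xz   # high preset: slow, best ratio
xz -T0 -c sample.csv > sample-par.xz   # -T0: use all cores (xz >= 5.2)

ls -l sample-fast.xz sample-best.xz sample-par.xz
```

All three outputs decompress to the same content; only the compression time and the file size differ.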
One reason not to always prefer XZ is that XZ files are slower to decompress than gzip files, here 70% slower:
<code>$ time xzgrep -c 0 HIGGS.csv.xz
11000000
88.47user 6.94system 1:29.84elapsed 106%CPU (0avgtext+0avgdata 10140maxresident)k
1583624inputs+0outputs (0major+2465minor)pagefaults 0swaps</code>
Another reason is that compressing with xz takes a lot of time: the xz command above took more than one hour and 49 minutes.  Compressing HIGGS.csv with gzip only takes seven minutes and 16 seconds: more than 15 times less!  bzip2 is intermediate: it takes 11 minutes and 3 seconds, and the resulting file weighs 2.0 GB.  However, it is by far the slowest to decompress: 'bzgrep -c 0 HIGGS.csv.bz2' takes five minutes and 11 seconds to complete.
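The three compressors and their grep wrappers can be compared side by side; a sketch you can run to reproduce the comparison on your own machine, using a generated stand-in file (with HIGGS.csv, expect the full times quoted above; bzgrep and xzgrep ship with the bzip2 and xz packages):

```shell
# Generate a stand-in file, then time each compressor on it.
seq 200000 | sed 's/$/,0.5,0/' > sample.csv

time gzip  -c sample.csv > sample.csv.gz    # fastest to compress
time bzip2 -c sample.csv > sample.csv.bz2   # intermediate
time xz    -c sample.csv > sample.csv.xz    # slowest, best ratio

ls -l sample.csv.gz sample.csv.bz2 sample.csv.xz

# The matching line counts through each decompressing grep:
zgrep  -c 0 sample.csv.gz
bzgrep -c 0 sample.csv.bz2
xzgrep -c 0 sample.csv.xz
```

All three counts are identical, so the choice of format only affects time and space, never the result.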

Note that I actually picked a bad example (but having downloaded 2.7 GB for it, I am going to use it!): plain text files can often be compressed far more, and the gains are then larger, including the time gained by reading gzip files rather than the uncompressed text.
