I wanted a quick and simple script that could back up the Postgres WAL into S3. The following short Bash pipeline does exactly what I need:

pg_recvlogical -f - --start -d database -h 127.0.0.1 \
  -U postgres --slot test_slot | gsplit -dl 1000000 \
  --filter='bzip2 -9 > $(date +%s).bz2' - logical

This will continuously stream the Postgres WAL through a logical replication slot named test_slot (which must be created beforehand), split the output every 1,000,000 lines, compress each chunk with bzip2, and save it to a file named after the current Unix timestamp. (gsplit is GNU split, as installed by Homebrew's coreutils on macOS; on Linux the plain split works.)
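
For reference, the slot itself can be created with pg_recvlogical as well. A minimal sketch, assuming the wal2json output plugin (the plugin choice is my assumption; any logical decoding plugin that emits one JSON document per line will work with the pipeline above):

# Create the replication slot used above; wal2json is an assumed
# plugin choice that emits each change as a JSON object.
pg_recvlogical -d database -h 127.0.0.1 -U postgres \
  --slot test_slot --create-slot --plugin wal2json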

Combining this with, say, S3FS, one could keep a continuous, compressed copy of ALL changes happening to the Postgres databases. Should some catastrophic failure happen, the data could be restored by replaying the captured changes. This assumes that the received WAL is in JSON format, with every change emitted as a one-line JSON string.
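
To sketch the S3FS part: mount the bucket and run the pipeline from inside the mount, so each compressed chunk is written straight to S3. The bucket name, mount point, and credentials file path here are illustrative assumptions:

# Mount an S3 bucket via s3fs-fuse (bucket and mount point are made up).
s3fs wal-backups /mnt/wal -o passwd_file=${HOME}/.passwd-s3fs
# Run the same pipeline with the mount as the working directory.
cd /mnt/wal
pg_recvlogical -f - --start -d database -h 127.0.0.1 \
  -U postgres --slot test_slot | gsplit -dl 1000000 \
  --filter='bzip2 -9 > $(date +%s).bz2' - logical

For a restore, the timestamped chunks can be concatenated in numeric order and decompressed (bzip2 handles concatenated streams) to recover the full change stream:

# Rebuild the JSON change stream from the chunks, oldest first.
cat $(ls *.bz2 | sort -n) | bunzip2 > changes.json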