(Chris Parton) #1

Hi all,

I’ve been thinking about LED data compression lately. I have a Raspberry Pi server that sends RGB data wirelessly over UDP to multiple ESP32 controllers. I’m currently serving data for 1000 pixels at 60fps and it’s working great, but I’m planning on adding more pixels and controllers and I think I’ll start hitting performance issues.

I’m considering either a general compression algorithm, or using heuristics on each frame of data to optimise them.

For instance, if every pixel for a given frame is the same colour, I can just send one colour value and an indication that this is “optimisation type 1: single colour” or similar.
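That heuristic could be sketched roughly like this (the packet type values and layout here are made up for illustration, not from Sparkled):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical frame encoding: if every pixel in the frame is the same
 * colour, emit a 4-byte "single colour" packet instead of 3*n bytes.
 * The type codes and layout are illustrative assumptions. */
#define PACKET_RAW          0  /* 1-byte header + 3*n bytes of RGB */
#define PACKET_SINGLE_COLOR 1  /* 1-byte header + 3 bytes of RGB  */

/* rgb: n pixels, 3 bytes each; out must hold at least 1 + 3*n bytes.
 * Returns the number of bytes written to out. */
size_t encode_frame(const uint8_t *rgb, size_t n, uint8_t *out) {
    int uniform = 1;
    for (size_t i = 1; i < n; i++) {
        if (memcmp(rgb, rgb + 3 * i, 3) != 0) { uniform = 0; break; }
    }
    if (uniform && n > 0) {
        out[0] = PACKET_SINGLE_COLOR;
        memcpy(out + 1, rgb, 3);
        return 4;
    }
    out[0] = PACKET_RAW;
    memcpy(out + 1, rgb, 3 * n);
    return 1 + 3 * n;
}
```

For 1000 pixels that turns a worst case of ~3 KB into 4 bytes whenever the frame is a single colour, and falls back to raw data otherwise.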

Has anyone else done anything similar?
Thanks!

(Yves BAZIN) #2

@Chris_Parton I'd be interested to know how you synchronise your ESP32s; I'm looking to do something similar. How many LEDs per ESP32 do you have?
Regarding compression, one option would be to take a standard compression library in C/C++ and port it to the Raspberry Pi and the ESP32. I'm willing to work with you on that.
I'm driving 6000 LEDs at 30fps (I haven't tried going higher) with one ESP32.

(Chris Parton) #3

@Yves_BAZIN I’m developing an LED sequencer called Sparkled (https://github.com/sparkled/sparkled) that contains a UI for creating sequences and a server to play music and stream synchronised data to the ESP32s.

Here’s a (pretty crappy) video of it running last year:
https://youtu.be/LjgJN3SzmT0. I’ve been rewriting a lot of stuff so it isn’t hard-coded to my setup, which has been a lot of work 🙂

Because I’m sending out individual frame data consisting of ~150 RGB values, I don’t know how well it will compress with a standard compressor. It could be fine, I’ll just have to try it.

I’m running one ESP32 per strip in the video, so 8 ESP32s driving either 50 UCS1903 or 150 WS2812B pixels. I wasn’t hitting any performance ceilings with that amount, and I’m sure a faster PC could drive many more.

(John Corbett) #4

I’ve been considering taking differences before run-length encoding. It’s simple and could provide big savings for ramps and fades as well as solid colours. A good general compression algorithm should make deltas redundant, but would likely be slower and require more code.
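A minimal sketch of that delta step (byte-wise differences mod 256, per channel, before RLE; names are mine, not from any project here):

```c
#include <stdint.h>
#include <stddef.h>

/* Delta-encode a buffer in place: each byte becomes the difference from
 * its predecessor, mod 256. A linear ramp such as 10,12,14,16 becomes
 * 10,2,2,2, which a run-length encoder then collapses nicely. */
void delta_encode(uint8_t *buf, size_t n) {
    uint8_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t cur = buf[i];
        buf[i] = (uint8_t)(cur - prev);  /* wraps mod 256, lossless */
        prev = cur;
    }
}

/* Exact inverse: running sum mod 256 restores the original bytes. */
void delta_decode(uint8_t *buf, size_t n) {
    uint8_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        prev = (uint8_t)(prev + buf[i]);
        buf[i] = prev;
    }
}
```

Because the subtraction wraps mod 256, the roundtrip is lossless without any sign handling.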

(Yves BAZIN) #5

@Chris_Parton Indeed, 150 pixels per ESP32 is really light; you have plenty of margin with your current setup.
And with only 150 RGB values per frame, I’m not sure you’ll gain a lot by compressing unless you write the compression rules yourself.
Also, since you’re using an ESP32, you can always do the decompression on the second core.

(Chris Parton) #6

@Yves_BAZIN I want to support other clients in the future (possibly ESP8266, Arduino with Ethernet breakout etc.), so I have to be careful not to implement anything that takes too much memory to decompress on the client. I think a set of simple heuristics will be the best solution.

(Chris Parton) #7

@John_Corbett the tricky part about deltas for my scenario is that packet loss and other factors mean that not every frame makes it to the client.

I could request a frame from the server and tell it which frame I currently have, but that means I’d be calculating deltas on the fly, which I want to avoid.

(Tod Kurt) #8

Are you more concerned about intra-frame compression (within a frame) or inter-frame compression (between frames)? If intra-frame, the easiest would probably be run-length encoding (RLE), where if you have a run of the same colour in the frame, like 100 pixels of #FF00FF, you send “100” and a single #FF00FF.

If inter-frame compression, then you could do the deltas as mentioned (with RLE, since there’s a good chance all the deltas would be the same or zero), and to avoid the issue @Chris_Parton mentions, take a page from MPEG and send periodic full frames (no deltas; “I-frames” in MPEG parlance) as a way to resync and recover from any lost packets.
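The intra-frame RLE described above could look something like this, encoding runs of identical RGB triplets as (count, r, g, b) records (the record format is an assumption for illustration):

```c
#include <stdint.h>
#include <string.h>

/* Run-length encode n RGB pixels as (count, r, g, b) records. Runs are
 * capped at 255 so the count fits in one byte. Worst case (no runs at
 * all) the output is 4*n bytes, so out must be at least that large.
 * Returns the number of bytes written. */
size_t rle_encode_rgb(const uint8_t *rgb, size_t n, uint8_t *out) {
    size_t w = 0, i = 0;
    while (i < n) {
        size_t run = 1;
        while (i + run < n && run < 255 &&
               memcmp(rgb + 3 * i, rgb + 3 * (i + run), 3) == 0) {
            run++;
        }
        out[w++] = (uint8_t)run;          /* run length */
        memcpy(out + w, rgb + 3 * i, 3);  /* the repeated colour */
        w += 3;
        i += run;
    }
    return w;
}
```

So Tod's example of 100 pixels of #FF00FF comes out as just 4 bytes; the trade-off is that a frame with no runs grows by a third.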

(Frédéric Delhoume) #9

You might want to decide between lossless and lossy compression. In either case, I would personally look at well-established image formats, such as PNG (zlib) for lossless and JPEG for lossy. There is huge support for zlib, so I would try to make my display code consume PNGs directly instead of RGB arrays. That way you create a PNG on the server and decode it on the client, using well-known libraries. Note that RLE is better suited to indexed (palette) images than to RGB images.

(Frédéric Delhoume) #10

The ESP32 apparently has support for a subset of zlib: https://www.esp32.com/viewtopic.php?t=2265

(Frédéric Delhoume) #11

If PNG adds too much overhead, you can use a raw deflate stream directly for your data, with additional info on width and height or whatever else is necessary. Inflate (decompression) is one call in most C libraries (uzlib or zlib), and deflate (compression) is available as one call in Java and in C libraries (zlib, uzlib). This makes a simple prototype doable in very little time.


https://docs.oracle.com/javase/7/docs/api/java/util/zip/Deflater.html

(Frédéric Delhoume) #12

See also https://github.com/richgel999/miniz that seems to be the library included in ESP32 ROM