Can anyone help me understand the best data types to use on a 32-bit architecture like an ESP32? Are there still benefits to using uint8_t and the like?
The answer is that it depends - it depends on what you're storing in the values, it depends on whether or not you care what happens when you add 1 to 255 (if you want it to wrap back around to 0, use a uint8_t), it depends on whether or not you care how much memory it's going to take up, etc… etc…
It's kind of tricky. That said, the RGBW rewrite of FastLED will also include support for 16- and 32-bit versions of CRGB, CRGBW, CHSV, etc… (though I'm torn on the global brightness value - whether I want it to be 0-255 across the board, or if I want that to scale as well ¯\_(ツ)_/¯ ) – and there are optimizations that I'm going to be able to take on 32-bit systems that I won't be able to on the 8-bit ones (yeah, I know, the ARM CPUs have like 50-400 MHz cores, but I still care about clock cycles).
I just mean for miscellaneous variables around my code. I can’t count the number of bugs I’ve made by prematurely optimizing variable sizes and later increasing something like NUM_LEDS above 256…
For the rewrite, would a 32-bit version of CRGB still look like 0-255 but allow things like summing all channels in a single operation? I don't understand what the benefit of higher-bit channels is if the strips still expect 8.
It sounds like in general I’m not saving clock cycles by minimizing the number of bits (barring fancy exceptions)? But possibly there could be space savings if the compiler is smart enough to merge multiple 8-bit variables?
(Thanks for everything you do by the way. There was so much amazing art on playa running your code!)
Basically it gives you much finer-grained control over what you're doing at a higher level, pre color correction and such. We may also be able to play more interesting games with dithering, even at full brightness, when you're using 16-bit RGB values for 8-bit chip outputs.
Also - there are LED control chipsets out there that are 12- and 16-bit : )