Has anyone seen any good examples of mapping from an evenly-spaced in-memory matrix to irregularly spaced physical pixels? I’ve seen that FadeCandy does it but (AFAIK) it requires processing running elsewhere on a powerful device, sampling from the screen. I’d like to do it on a Teensy/etc without an external dependency.

I’m thinking that pixel locations could be expressed in a way that makes it easy to combine nearby matrix cells. For instance, a pixel at (2.5, 4.5) could be calculated as “1/4 of (2,4) + 1/4 of (3,4) + 1/4 of (2,5) + 1/4 of (3,5)”

Then, since the matrix itself is often just a dithered version of the original pattern source… I wonder if it’s even possible to skip the intermediate matrix and distribute to the pixels upon (X,Y) write?

Anyway, looking for inspiration in existing libraries. Thanks!

So you mean something like having your pixels arranged in a spiral, but wanting to have an underlying XY matrix simulation, which sort of “renders” onto the pixels?

yep! exactly.

So there are two parts to this: recording the actual physical X,Y position of each pixel in your design (“mapping the pixels”), and then using the pixel map in your rendering.

While it’s definitely not “beginner” code, @Daniel_Garcia’s “GridMap” code does the second part of this very nicely – including an arbitrary ‘rotation’ angle of the mapping from underlying simulation to actual pixels.

Code is here https://gist.github.com/focalintent/5f97216341278976e749

That code assumes that you’ve already plotted out the actual physical X and Y coordinates of each of your pixels, and that they’re stored in an array. There are a number of ways to get to that point, as I’m sure you can imagine.

Really helpful, thanks! I like the idea of the mapping to a 255x255 virtual space… I’ll play with this and see how it goes…

There is also a Processing sketch that I want to add some UI to (select video input, serial output, etc…), then I’ll publish it – that will allow you to build an accurate 2D map of your pixel locations. This is a one-shot thing that’ll then give you an array of the X and Y coordinates of all your LEDs.

I have a lot of examples of this on my YouTube channel. My method: I use a Processing sketch to map the pixels using a webcam, turn that into a mapping file, then finally use that to generate appropriate files for running directly off an SD card on the embedded device (I use a Teensy 3.1 mostly these days, but this method works for regular Arduino Uno-style devices as well, since it’s a direct SD read, no skipping about). I’ve had to write a suite of tools for this: a mapping-file creator which uses either mouse clicks, or a webcam and single lights, or a webcam and patterns to get the positions; a couple of things to take either a folder full of pictures or of videos and pass them through the mapping to make a file for running off SD; and then the embedded software to load up the files and show them on the other end. Preprocessing is totally the way to go – it’s just not good enough otherwise – and I’ve been using FFT data to alter values on the embedded side, so I still get live audio reactivity.

I only preprocess to get the X/Y coordinates – I have a number of patterns that determine their value based off of X/Y – so then I just iterate through the LEDs: get the X/Y location, determine the value for that pixel, and go on. (Alternatively, I render a pattern into a 16x16 array, and use interpolation to map the values from there to the X/Y positions of each individual pixel.)

I should be up front though, I have a major allergic reaction to static patterns, or pre-computed imagery in my projects, so I go through quite a bit to avoid that :slight_smile:

I really like that you did it on the embedded side. The code is mostly beyond me, but I get the gist. I’m probably going to try something similar, but mapping a small grid of audio data taken from the Teensy 3.1 audio shield into affecting a precomputed file from SD – similar to what my latest does, but with proper 2D layouts possible, without the linear patterns appearing from modulating SD images by linear FFT data. I’d love to use all generated patterns, but I just can’t yet fathom decent ways to do it with free-style layouts (i.e. using mapping) without the extra memory needed, and indeed wasted, for those spaces in between. I think it might be possible in a limited fashion…
Also, as someone who actually really agrees with no pre-computing in general, I finally decided to justify it by the fact that if I just use big enough and perfectly looping ‘images’, then even an educated observer can’t tell. :smiley:

@devicer that’s clever and yeah, if your entire sequence is preprocessed then you can do it all offline & just directly feed the mapped LEDs. I like the idea of your use of embedded-side morphing to add reactivity back in. (Maybe you simulate it by feeding a palette-index video through & then morphing the palette in real time?)

Theoretically it seems one could take symbolic math one step further than Daniel’s sketch to preprocess all pixels into the 4 points they’re sampling, revealing the elements of the underlying matrix that “matter”. A sparse representation would be bounded in size by ~4 x NUM_LEDS…

@Daniel_Garcia Thanks, this code has been really helpful. It does exactly what I was hoping someone had already created :slight_smile:

@Steve_Eisner that’s what mine does at the moment – it preprocesses to a file of 3*NUM_LEDS+headersize bytes which has all the data for the mapping-processed video file (or picture, or whatever). Previously I was using the equivalent of a single picture that was shown line-by-line on the lights, which looked weird with configurations including screens or similar. Now, with mapping, the same-size file contains the essential parts of a complete video or complex visualiser, wrapped up into linear chunks to read off SD in exactly the same way – but producing the effect of a sparse 2D screen. (I hope my explanations are not too confusing!) I’ve just moved house and been without net for 3 weeks, so am only just getting the chaos under control, but I’ve probably got some code bits to share soon, and more videos of the end result.