Souping up Ikea: how to turn a disco light into Big Brother


Introduction


Ikea is a curious store. Even if you walk in intending to buy one specific thing and not get distracted by the rest of the junk, you will still leave having bought three times more than you needed. For us hackers, this effect is especially pronounced in the Ikea aisles with power cables or batteries.

My latest bout of this insanity happened in the Netherlands. I was there on a planned trip, expecting to spend only a few weeks in the country. But this was at the very beginning of 2020, and then the COVID-19 disaster struck ... I ended up stuck far from my lab for much longer.

So when I saw the Frekvens line of electronics at Ikea, I couldn't help myself and bought the lot.

For those who don't know: the Ikea Frekvens line is meant to be a modern take on the old-school HiFi stack plus disco-light set that seemingly half of all 90s teenagers had in their rooms. Here, the line of interconnectable devices consists of a Bluetooth speaker for sound output, an LED spotlight (with various clip-on attachments) that flickers to the beat of the music, and a cube-shaped LED display that shows animations, also in time with the music. Ikea boasts that the line was developed together with Teenage Engineering, known for its Pocket Operator series of portable "toy" synthesizers.


As the Ikea promotional video shows, a full set of a dozen or so of these devices adds up to an insufferable flickering carnival of the kind my inner 15-year-old would not have minded setting up in his room.

I was particularly interested in the LED cube display: it is a well-built, self-contained, mains-powered device. The front panel carries a matrix of 16 rows by 16 columns of very bright white LEDs, 256 in total. The built-in animations were fairly simple, but they suggested that the hardware can actually control each LED individually. The little box was not cheap, though: it set me back about 40 euros. For that money you get the box of LEDs and a power cable. Since you are expected to buy this product together with other Frekvens products, you also get some accessories: an extension power cable for daisy-chaining mains power to other boxes, as well as a set of screws and plastic spacers for mechanically fastening the housings to each other.

As soon as I got home, I plugged the device in and ... to be honest, I was a little disappointed with what they did with the hardware. The device has several animations, selectable with a button on the back of the case; a small microphone inside advances the animation by one frame every time it picks up a noise. The animations are not very interesting: they consist of only four or five frames each, and the frames themselves are pure black and white, with no shades of gray. If you're curious, I recorded a short video showing all the animations.

Obviously, this would not do. Let's hack the device: see what makes it tick, and whether its behavior can be made a little more interesting.

Teardown


To disassemble the Frekvens, a Phillips screwdriver and possibly a knife will suffice. You will also need a fair amount of persistence, because the device does not come apart easily: the construction is intricate, and some parts are potted in silicone, probably to suppress vibration and simplify manufacturing.


Disassembly of the Frekvens case starts at the back. The housing has screw holes on all sides except the front; using the screws and covers included with the device, it can be bolted to other Frekvens gadgets. The rear screw holes, however, are a little deeper, and at the bottom of them sit the screws that hold the case together.


With the back cover removed, we can see that all the screw holes have threaded metal inserts for the screws to bite into. Nice! That should give the holes plenty of strength. It probably doesn't make manufacturing or disassembly any easier, though; the latter is further complicated by the blobs of hardened silicone compound that have to be cut away first.


The metal inserts are actually held in place by further plastic inserts; those have to come out first.

I have no more photos of the disassembly process, but after this stage you will find four screws to unscrew, and then the process repeats as before: figure out which plastic piece is holding everything else together, cut the silicone compound, remove the plastic fasteners. Continue until you can pull out the circuit board.


Here it is: the back of the circuit board. As it turns out, the electronics consist of two PCBs: the white one is double-sided, with all the LED drivers on the back (plus the second board attached to it) and the LEDs themselves on the front.

The white PCB is designed quite well: the LED matrix is driven by 16 SCT2024 constant-current LED drivers. Each of these drivers has 16 channels, so every LED is controlled directly. The drivers can only switch the LEDs fully on or off; they have no native grayscale support. Essentially, they are shift registers with current-limited outputs, all daisy-chained together; the connection interface to the green PCB consists of their clock and data lines, a latch-enable line, an output-enable line, ground, and power.

The green board carries the tiny brain of the device: a microcontroller whose part number turns up nothing on Google, and an op-amp for the microphone. Interestingly, there are hints that the device was meant to have more features: there is an unpopulated footprint for what looks like a 24Cxx I2C EEPROM, and signs that the original design had an IR receiver on the front panel. Perhaps the device was supposed to ship with a remote control for creating your own patterns? That might explain why the existing animations look so dull.


On the front of the white board are all the LEDs. A tiny flat microphone is soldered to it as well. At first I thought it was mounted on the green board and poked through a hole in the white one, but no, it really is just that flat.


The rest of the electronics live on the back of the board. There isn't much: only an Ikea-branded power supply that outputs 4 V. Under the power supply is a small PCB with two tact switches for the buttons on the back of the case.

So, if we want to "upgrade" the design, there is one obvious weak point: the small green board with the processor is best replaced with something more powerful.

Modifying the hardware


I figured I could replace the controller with an ESP-Cam. This is a very cheap (10 euros or so) board with an ESP32 WiFi/BT chip, several megabytes of PSRAM, and a camera module. Disco lights don't interest me much; I wanted the device to react to something visual. Since there was already a hole in the front for the IR receiver, I thought I could reuse it for the camera. I also needed to drill a hole in the white board where the microphone sits; fortunately, apart from the now-useless microphone traces, that would sever only a single LED trace. I drilled the hole and repaired the trace. Temporarily putting the board back and powering up the cube, I confirmed that all the LEDs still worked.


Now I had a board with a hole drilled in it, and all that remained was to wire up and mechanically mount the ESP-Cam. The designers kindly labeled the purpose of every pad on the green board's silkscreen, so I barely had to guess at anything. Since the power supply only provides 4 V, I connected it directly to the 3.3 V input of the ESP-Cam. I had to, because when I fed it into the 5 V Vin pin, the chip refused to start ... most likely the LDO regulator's dropout voltage is a bit too high. In hindsight, it would have been worth trying a diode in series with the 4 V supply to get 3.3-3.4 V, but my solution worked; the ESP32 is fairly forgiving.


With all the wires soldered, I still had to deal with the mechanics. This was a quick-and-dirty project, and I had already soldered the headers onto the ESP-Cam earlier, so aligning the camera with the hole came down to a few spacers and a generous amount of epoxy.


After this operation one problem remained: the LEDs next to the camera shone into its lens opening, causing all sorts of strange artifacts. As a first step, I cut a small shield out of aluminum tape to protect the camera from direct light.


The second step involved black acrylic paint and the plastic insert that carries all the diffusers the LEDs shine through. I had already enlarged its microphone hole to give the camera a wider field of view, but that introduced another problem: the hole was too conspicuous, and too much diffused light fell onto the camera. A blob of black acrylic paint handled both issues reasonably well; the camera still caught quite a bit of light from the LEDs, but at least it was no longer completely flooded.

With the brain transplant complete and the body ready for reassembly, it was time for the software.

Software


Let's move on to the software. First of all, we need to gain control of the LED drivers. That's not hard: the driver signals map almost directly onto the ESP32's SPI peripheral. The CLK pad on the board goes to the SPI peripheral's CLK, and the DA pad goes to the SPI MOSI signal. This lets us clock 256 bits into the chain of LED drivers, with each bit turning the corresponding LED on or off. There is also a LAK (latch) input: while it is low, the LEDs hold their previous values regardless of what is shifted into the registers; when it goes high, they all simultaneously take on whatever is currently in the shift registers. Finally, there is an EN input that enables or disables all the LEDs; I did not connect it to a GPIO and simply left it so the LEDs are always enabled.
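
If you want to replicate this, the hookup boils down to very little code. Here is a minimal sketch using the ESP-IDF SPI master driver; the pin numbers and the 10 MHz clock are assumptions for illustration, not necessarily what my build uses:

```c
#include "driver/spi_master.h"
#include "driver/gpio.h"

// Hypothetical pin choices for this sketch; use whatever ESP32 pins you
// actually wired to the CLK, DA and LAK pads.
#define PIN_CLK  14
#define PIN_DATA 13
#define PIN_LAK  15

static spi_device_handle_t led_chain;

void led_bus_init(void)
{
    spi_bus_config_t bus = {
        .mosi_io_num = PIN_DATA,
        .miso_io_num = -1,          // the drivers never talk back
        .sclk_io_num = PIN_CLK,
        .quadwp_io_num = -1,
        .quadhd_io_num = -1,
    };
    spi_device_interface_config_t dev = {
        .clock_speed_hz = 10 * 1000 * 1000,
        .mode = 0,
        .spics_io_num = -1,         // no chip select; we latch manually
        .queue_size = 1,
    };
    ESP_ERROR_CHECK(spi_bus_initialize(SPI2_HOST, &bus, SPI_DMA_CH_AUTO));
    ESP_ERROR_CHECK(spi_bus_add_device(SPI2_HOST, &dev, &led_chain));
    gpio_set_direction(PIN_LAK, GPIO_MODE_OUTPUT);
}

// Clock one 256-bit frame (16 drivers x 16 channels) into the chain,
// then pulse LAK so all outputs take the new values at once.
void led_send_frame(const uint8_t frame[32])
{
    spi_transaction_t t = {
        .length = 256,              // transaction length is in bits
        .tx_buffer = frame,
    };
    ESP_ERROR_CHECK(spi_device_transmit(led_chain, &t));
    gpio_set_level(PIN_LAK, 1);
    gpio_set_level(PIN_LAK, 0);
}
```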

So now I can turn individual LEDs on and off, giving me a black-and-white display. But I want more. Given enough computing power, the display should also be capable of shades of gray: you just have to switch the LEDs on and off faster than the eye can see.

To do this with a single LED I would normally use PWM, but with 256 LEDs that would eat a significant chunk of the ESP32's CPU power. So instead I decided to use Binary Code Modulation (also called Bit Angle Modulation), a technique that produces grayscale with much less CPU effort. I had already used it successfully in other projects, so I was confident it would work.
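
The gist of the technique, as a sketch: split the 8-bit frame buffer into eight bitplanes and show plane n for a time proportional to 2^n, so a full grayscale frame costs only eight shift-outs. The pixel-to-bit packing and the timing constant below are placeholders; the real mapping depends on how the drivers are wired to the matrix.

```c
#include <stdint.h>
#include <string.h>
#include "esp_rom_sys.h"   // esp_rom_delay_us(); older IDFs use ets_delay_us()

// 8-bit grayscale frame buffer (values already run through the CIE lookup).
static uint8_t framebuf[16][16];

// Show all eight bitplanes once. Plane n stays on for 2^n time units, so
// the average duty cycle of each LED matches its 8-bit pixel value.
// The row-major packing below is an assumption for this sketch.
void bcm_refresh_once(void)
{
    for (int plane = 0; plane < 8; plane++) {
        uint8_t bits[32];
        memset(bits, 0, sizeof(bits));
        for (int y = 0; y < 16; y++) {
            for (int x = 0; x < 16; x++) {
                if (framebuf[y][x] & (1 << plane)) {
                    bits[y * 2 + x / 8] |= 0x80 >> (x % 8);
                }
            }
        }
        led_send_frame(bits);
        esp_rom_delay_us(30 << plane);  // crude busy-wait; a hardware timer
                                        // would be the cleaner way to do this
    }
}
```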


And it worked. I also added a lookup table converting CIE lightness to PWM duty, because the eye's response to LED brightness is essentially non-linear; the lookup table corrects for this. In the end I got a decent number of gray shades, with perceived brightness scaling linearly with the pixel value I set.
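
For the curious, such a table is easy to generate from the standard CIE 1931 lightness formula; a sketch, assuming 8 bits in and 8 bits out (more output bits would give a smoother low end):

```c
#include <math.h>
#include <stdint.h>

// 8-bit pixel value (treated as CIE 1931 lightness) -> linear duty cycle.
static uint8_t cie_lut[256];

void build_cie_lut(void)
{
    for (int i = 0; i < 256; i++) {
        double L = i * 100.0 / 255.0;   // lightness L*, scaled to 0..100
        double Y = (L <= 8.0) ? L / 903.3                   // linear segment
                              : pow((L + 16.0) / 116.0, 3); // cube-law segment
        cie_lut[i] = (uint8_t)lround(Y * 255.0);
    }
}
```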

Getting the camera going was not difficult either: there is an ESP-IDF component you can add to the project that takes care of configuring and talking to the camera; you only have to specify the parameters you need and then request the bitmap the camera sees. The one thing I could not use in the default configuration was that auto-gain and auto-exposure were enabled, and that interfered badly with my way of driving the LEDs: depending on which part of the Binary Code Modulation sequence a frame happened to be captured in, the camera would more or less randomly crank the gain and exposure up or down. I fixed this by switching the camera to manual gain and exposure; my code inspects the image itself and decides whether those parameters need adjusting. This manual processing also let me exclude the pixels that look directly at the LEDs, taking them out of the equation entirely. That, too, helped a lot in getting a stable image.
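
In terms of the esp32-camera component this amounts to a few calls on the sensor object; a sketch, with placeholder values:

```c
#include "esp_camera.h"

// Assumes esp_camera_init() has already run. The fixed exposure/gain values
// are placeholders; in practice the firmware adjusts them itself based on
// the captured image.
void camera_fix_exposure(void)
{
    sensor_t *s = esp_camera_sensor_get();
    s->set_exposure_ctrl(s, 0);  // auto-exposure off
    s->set_gain_ctrl(s, 0);      // auto-gain off
    s->set_aec_value(s, 300);    // manual shutter value
    s->set_agc_gain(s, 4);       // manual gain value
}
```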

So now I had a camera and a 16x16 grayscale canvas. What to do with them? Then I had an idea: back in the days of the earliest Macintosh computers, there was a popular extension that did nothing but add a pair of cartoon eyes to the menu bar. The eyes simply followed the cursor. (A variation on the theme still exists for Linux/Unix, called "xeyes".) What if I recreated that in real life?


For starters I needed some images of eyes. I didn't think I could draw anything worthwhile, so I decided to use my own eyes as the source material instead, and shot a short video of myself looking around and blinking in all directions. Bear in mind that, given the circumstances, I had very few professional tools at hand: I lay on the floor to catch as much light as possible from the ceiling fluorescent lamps and get a usable image, and the whole video was shot on a smartphone held in my hands.

Since the raw footage was fairly sloppy, the post-processing took some work. I started by cropping out a region roughly around my right eye. Then I converted each video frame into an image and deleted all the redundant ones; in the end I had a good selection of images of me looking in various directions, plus a few frames of blinking. However, because I had filmed with a phone, the video was a bit shaky and the images jumped around. To fix this I ran the image set through Hugin, a stitching program usually used for panoramas and HDR images; out came pictures perfectly centered on that part of my face. All that remained was to annotate where I was looking and whether I was blinking. I did this by first converting all the images to grayscale and then loading them into Gimp. In each image I placed a red dot at the center of the pupil, plus a red dot in the left or right corner to mark a blink and whether the eye was half-open or fully closed.


With each image annotated this way, it became trivial to write a script that extracts the pupil location and the blink state. The script also scaled the images down to 16x16 and saved them as raw binary data, ready to be baked into the ESP-Cam firmware. In the end I had a set of images plus an index of where the pupil is and what blink state the eye is in.

The ESP-Cam has a fairly easy-to-use camera library component for ESP-IDF, so grabbing images was straightforward. I configured it to capture 160x120 grayscale frames, since those are easiest to process and I didn't need much resolution anyway; the final result only has to show on a 16x16 screen. One hardware problem remained, though: the camera is still too close to the LEDs.
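
For reference, a minimal init sketch with that component; the pin numbers are the usual AI-Thinker ESP32-CAM mapping and some config field names vary slightly between component versions, so treat this as illustrative:

```c
#include "esp_camera.h"

// Pin numbers are the common AI-Thinker ESP32-CAM mapping; check them
// against your board. Some component versions spell the SCCB pins
// pin_sscb_sda / pin_sscb_scl instead.
void camera_start(void)
{
    camera_config_t cfg = {
        .pin_pwdn = 32, .pin_reset = -1, .pin_xclk = 0,
        .pin_sccb_sda = 26, .pin_sccb_scl = 27,
        .pin_d7 = 35, .pin_d6 = 34, .pin_d5 = 39, .pin_d4 = 36,
        .pin_d3 = 21, .pin_d2 = 19, .pin_d1 = 18, .pin_d0 = 5,
        .pin_vsync = 25, .pin_href = 23, .pin_pclk = 22,
        .xclk_freq_hz = 20000000,
        .ledc_timer = LEDC_TIMER_0,
        .ledc_channel = LEDC_CHANNEL_0,
        .pixel_format = PIXFORMAT_GRAYSCALE, // one byte per pixel
        .frame_size = FRAMESIZE_QQVGA,       // 160x120: plenty for 16x16
        .fb_count = 2,
    };
    ESP_ERROR_CHECK(esp_camera_init(&cfg));
}

void process_one_frame(void)
{
    camera_fb_t *fb = esp_camera_fb_get();   // fb->buf: 160*120 gray bytes
    // ... analyze fb->buf here ...
    esp_camera_fb_return(fb);                // always hand the buffer back
}
```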


At first I tried to solve it with a calibration step: on startup, the device takes two pictures, one with an LED close to the camera lens lit, and one with only an LED far from the camera lit. Subtracting one image from the other reveals which pixels are affected by the LEDs. Those pixels are recorded in a mask, and masked pixels are ignored from then on. The image above shows the two shots and the resulting mask.
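
Boiled down, the calibration is just a thresholded difference of the two pictures; a sketch, with a made-up threshold parameter:

```c
#include <stdint.h>

#define CAM_W 160
#define CAM_H 120

// 1 = pixel is blinded by our own LEDs, ignore it from now on.
static uint8_t led_mask[CAM_W * CAM_H];

// 'lit' was captured with an LED near the lens on, 'dark' with only a far
// LED on; pixels that brighten by more than the threshold are glare.
void build_led_mask(const uint8_t *lit, const uint8_t *dark, int threshold)
{
    for (int i = 0; i < CAM_W * CAM_H; i++) {
        led_mask[i] = ((int)lit[i] - (int)dark[i] > threshold) ? 1 : 0;
    }
}
```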

With a cleaner image I could move on to motion detection. I implemented it by grabbing a frame from the camera and subtracting the previous frame from it. Summing all the resulting pixels gives the amount of motion. The location of the motion can then be found by taking the average of the coordinates of all pixels, weighted by the frame difference at each pixel. Finally, some filtering magic is applied so the device doesn't dart around like a Jack Russell terrier with ADHD: to attract its attention, an object has to move either steadily enough or far enough.
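
Leaving out the filtering magic, the core of the motion detector could look like this sketch (it reuses the mask from the calibration step above):

```c
#include <stdint.h>
#include <stdlib.h>

static uint8_t prev_frame[CAM_W * CAM_H];

// Returns the total amount of motion; on return, *cx/*cy hold the
// difference-weighted centroid of the moving pixels.
long detect_motion(const uint8_t *cur, float *cx, float *cy)
{
    long total = 0, sx = 0, sy = 0;
    for (int y = 0; y < CAM_H; y++) {
        for (int x = 0; x < CAM_W; x++) {
            int i = y * CAM_W + x;
            if (!led_mask[i]) {            // skip pixels blinded by the LEDs
                int d = abs((int)cur[i] - (int)prev_frame[i]);
                total += d;
                sx += (long)d * x;
                sy += (long)d * y;
            }
            prev_frame[i] = cur[i];
        }
    }
    if (total > 0) {
        *cx = (float)sx / total;           // weighted average position
        *cy = (float)sy / total;
    }
    return total;
}
```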

Only one problem remained: in a very dark room, the motion detector would sometimes trigger on the reflections of the LEDs on shiny objects: when the eye blinked, it would turn to look at its own reflection. That was not quite what I wanted, so I worked around it by making the motion detector "blind" whenever the image on the LEDs changes; that put a stop to the behavior. It's also slightly more realistic: while you blink, you can't see.

The firmware has several debug modes that show the whole pipeline at work. Here is a short video illustrating it:


Conclusion


So now I have a device that watches me ... not terribly useful, I admit. But it was fun to build, and if I ever come up with something more useful to run on a 16x16 LED screen, I can just do it, since the driver code already exists. Speaking of code: you can freely download my rough results from here. I hope you enjoyed it, and take care.