Programming a game for an embedded device on ESP32: drive, battery, sound



Start: build system, input, display.

Part 4: drive


The Odroid Go has a microSD card slot, which will be useful for loading resources (sprites, sound files, fonts) and possibly even for saving game state.

The card reader is connected via SPI, but IDF makes it easy to interact with the SD card by abstracting the SPI calls behind standard POSIX functions like fopen, fread and fwrite. All of this is built on the FatFs library, so the SD card must be formatted with the standard FAT file system.
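
As a quick illustration of that abstraction (a sketch of my own, not from the original article; the file name is made up and the "/sdcard" mount prefix is introduced below), writing a file looks like ordinary C file I/O once the card is mounted:

#include <stdio.h>

// Illustrative only: assumes the card is already mounted at "/sdcard" (see below)
// and that the file name fits the 8.3 limit discussed later.
void WriteGreeting(void)
{
	FILE* f = fopen("/sdcard/hello.txt", "w");

	if (f != NULL)
	{
		fprintf(f, "Hello from the ESP32!\n");
		fclose(f);
	}
}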

It is connected to the same SPI bus as the LCD, but uses a different chip select line. When we need to read or write the SD card (which does not happen very often), the SPI driver switches the CS signal from the display to the SD card reader and then performs the operation. This means that while sending data to the display we cannot perform any operations on the SD card, and vice versa.

At the moment we do everything in one thread and use blocking SPI transmission to the display, so there can be no simultaneous transactions with the SD card and the LCD anyway. In any case, we will most likely load all resources at launch.

Modification of ESP-IDF


If we try to initialize the SD card interface after initializing the display, we run into a problem that prevents the Odroid Go from booting: ESP-IDF v4.0 does not support sharing the SPI bus with an SD card. The developers have recently added this functionality, but it has not yet reached a stable release, so we will make a small modification to IDF ourselves.

Comment out line 303 of esp-idf/components/driver/sdspi_host.c:

// Initialize SPI bus
esp_err_t ret = spi_bus_initialize((spi_host_device_t)slot, &buscfg,
    slot_config->dma_channel);
if (ret != ESP_OK) {
    ESP_LOGD(TAG, "spi_bus_initialize failed with rc=0x%x", ret);
    //return ret;
}

After making this change we will still see an error logged during initialization, but it will no longer cause the ESP32 to restart, because the error code is no longer propagated upward.

Initialization




We need to tell IDF which ESP32 pins are connected to the MicroSD reader so that it correctly configures the underlying SPI driver, which actually communicates with the reader.

The schematic again uses the generic VSPI.XXXX net names, but we can trace them to the actual pin numbers on the ESP32.

Initialization is similar to the LCD initialization, but instead of the general SPI configuration structure we use sdspi_slot_config_t, which is intended for an SD card connected over SPI. We configure the corresponding pin numbers and the properties for mounting the card into the FatFs filesystem.

The IDF documentation does not recommend using the esp_vfs_fat_sdmmc_mount function in production code. It is a convenience wrapper that performs a lot of operations for us, but for now it works perfectly well, and that is unlikely to change.

The "/ sdcard" parameter of this function sets the virtual mount point of the SD card, which we will then use as a prefix when working with files. If we had a file named “test.txt” on our SD card, the path we would use to link to it would be “/sdcard/test.txt”.

Once the SD card interface is initialized, working with files is trivial: we simply use the standard POSIX calls, which is very convenient.

Note that by default FatFs only supports filenames in the old 8.3 format, so fopen will fail on longer names. Long filename support can be enabled in make menuconfig, but for now it is easier to simply keep filenames within 8.3.



I created a (terrible) 64x64 sprite in Aseprite that uses only two colors: pure black (pixel off) and pure white (pixel on). Aseprite cannot save in RGB565 color or export a raw bitmap (i.e. without compression or image headers), so I exported the sprite to PNG as an intermediate format.

Then, using ImageMagick, I converted the image to a PPM file, which is raw uncompressed data with a simple header. Next I opened the file in a hex editor, deleted the header and converted the 24-bit color to 16-bit by replacing every occurrence of 0x000000 with 0x0000 and every occurrence of 0xFFFFFF with 0xFFFF. Byte order is not a problem here, because 0x0000 and 0xFFFF look the same in either byte order.
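
The same substitution could be done programmatically rather than in a hex editor. A small sketch of my own (not part of the author's workflow) that converts the decoded two-color 24-bit pixel data into the raw 16-bit sprite format:

#include <stdint.h>
#include <stddef.h>

// Black (0x000000) stays 0x0000, anything else (here, white 0xFFFFFF) becomes 0xFFFF
static void ConvertTwoColorRgb888ToRgb565(const uint8_t* rgb888, uint16_t* out, size_t pixelCount)
{
	for (size_t i = 0; i < pixelCount; ++i)
	{
		const uint8_t* p = &rgb888[i * 3];
		int isBlack = (p[0] == 0x00) && (p[1] == 0x00) && (p[2] == 0x00);

		out[i] = isBlack ? 0x0000 : 0xFFFF;
	}
}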

The raw file can be downloaded here.

FILE* spriteFile = fopen("/sdcard/key", "r");
assert(spriteFile);

uint16_t* sprite = (uint16_t*)malloc(64 * 64 * sizeof(uint16_t));
assert(sprite);

// Read all 64x64 16-bit pixels in a single call
fread(sprite, sizeof(uint16_t), 64 * 64, spriteFile);

fclose(spriteFile);

First, we open the key file containing raw bytes and read it into the buffer. In the future, we will load sprite resources differently, but for a demo this is quite enough.

int spriteRow = 0;
int spriteCol = 0;

for (int row = y; row < y + 64; ++row)
{
	spriteCol = 0;

	for (int col = x; col < x + 64; ++col)
	{
		uint16_t pixelColor = sprite[64 * spriteRow + spriteCol];

		if (pixelColor != 0)
		{
			gFramebuffer[row * LCD_WIDTH + col] = color;
		}

		++spriteCol;
	}

	++spriteRow;
}

To draw the sprite, we iterate over its contents. If a pixel is white, we draw it in the color selected by the buttons; if it is black, we treat it as background and skip it.


My phone's camera distorts the colors badly. And sorry for the shaky footage.

To test writing to the card, we will move the key somewhere on the screen, change its color, and then write the frame buffer to the SD card so that it can be viewed on a computer.

if (input.menu)
{
	const char* snapFilename = "/sdcard/framebuf";

	ESP_LOGI(LOG_TAG, "Writing snapshot to %s", snapFilename);

	FILE* snapFile = fopen(snapFilename, "wb");
	assert(snapFile);

	fwrite(gFramebuffer, sizeof(gFramebuffer[0]), LCD_WIDTH * LCD_HEIGHT, snapFile);

	fclose(snapFile);
}

Pressing the Menu key saves the contents of the frame buffer to a file called framebuf. This is a raw dump, so the pixels are still in RGB565 format with the bytes swapped. We can again use ImageMagick to convert it to PNG for viewing on a computer.

convert -depth 16 -size 320x240+0 -endian msb rgb565:FRAMEBUF snap.png

Of course, we could implement reading/writing of BMP or PNG and get rid of all this fuss with ImageMagick, but this is just demo code. I have not yet decided which file format I want to use for storing sprites.
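
For reference, here is a sketch (mine, not the author's) of dumping the framebuffer as a binary PPM instead of raw RGB565, which any image viewer can open without the ImageMagick step. It assumes gFramebuffer holds byte-swapped RGB565, as described above.

#include <stdio.h>
#include <stdint.h>
#include <assert.h>

static void WriteFramebufferPpm(const char* path)
{
	FILE* f = fopen(path, "wb");
	assert(f);

	// PPM header: binary RGB, image dimensions, 8 bits per channel
	fprintf(f, "P6\n%d %d\n255\n", LCD_WIDTH, LCD_HEIGHT);

	for (int i = 0; i < LCD_WIDTH * LCD_HEIGHT; ++i)
	{
		uint16_t swapped = gFramebuffer[i];
		uint16_t pixel = (uint16_t)((swapped >> 8) | (swapped << 8)); // undo the byte swap

		uint8_t rgb[3];
		rgb[0] = (uint8_t)((pixel >> 11) << 3);         // 5 bits of red
		rgb[1] = (uint8_t)(((pixel >> 5) & 0x3F) << 2); // 6 bits of green
		rgb[2] = (uint8_t)((pixel & 0x1F) << 3);        // 5 bits of blue

		fwrite(rgb, 1, sizeof(rgb), f);
	}

	fclose(f);
}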


Here it is! The Odroid Go frame buffer displayed on a desktop computer.


Part 5: battery


The Odroid Go has a lithium-ion battery, so we can create a game that can be played on the go. That is a tempting idea for someone who grew up playing the original Game Boy.

So we need a way to query the Odroid Go's battery level. The battery is connected to a pin on the ESP32, so we can read its voltage to get a rough idea of the remaining run time.

Scheme



The schematic shows IO36 connected to the VBAT voltage after it passes through a resistor to ground. Two resistors (R21 and R23) form a voltage divider similar to the one used for the d-pad; the resistors again have equal values, so the voltage is halved.

Because of the voltage divider, IO36 sees a voltage equal to half of VBAT. This is probably done because the ADC pins on the ESP32 cannot read the full voltage of the lithium-ion battery (4.2 V at full charge). In any case, it means that to get the true battery voltage we need to double the voltage read from the ADC.

When we read IO36 we get a digital value, but we need a way to interpret that digital ADC reading as the physical analog voltage it represents.

IDF provides ADC calibration, which estimates the voltage based on a reference voltage. That reference voltage (Vref) is nominally 1100 mV, but due to manufacturing variation each device is slightly different. The ESP32 in the Odroid Go has a factory-measured Vref burned into eFuse, which we can use for a more accurate result.

The procedure is as follows: first we configure the ADC calibration; then, whenever we want to read the voltage, we take a number of samples (for example, 20) and average them; finally we use IDF to convert that averaged reading into a voltage. Averaging smooths out noise and gives more accurate readings.

Unfortunately, there is no linear relationship between voltage and battery charge. The voltage drops as the charge decreases and rises as it increases, but not in a predictable way. About all that can be said is that if the voltage is below roughly 3.6 V, the battery is nearly empty; accurately converting a voltage into a charge percentage is surprisingly difficult.

For our project this is not particularly important. We can implement a rough approximation to let the player know the device needs charging soon, but we will not agonize over getting an exact percentage.
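
As an illustration only (this is my sketch, not the author's code), one rough approach is to linearly map an assumed usable range of about 3500-4200 mV onto 0-100%:

#include <stdint.h>

// Rough, assumed thresholds: ~3500 mV as "effectively empty", 4200 mV as full charge
static int RoughBatteryPercent(uint32_t millivolts)
{
	const uint32_t EMPTY_MV = 3500;
	const uint32_t FULL_MV = 4200;

	if (millivolts <= EMPTY_MV) { return 0; }
	if (millivolts >= FULL_MV) { return 100; }

	return (int)((millivolts - EMPTY_MV) * 100 / (FULL_MV - EMPTY_MV));
}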

Status LED



On the front panel below the Odroid Go screen there is a blue LED that we can use for any purpose. We could use it to show that the device is powered on and running, but then a bright blue LED would shine in your face while playing in the dark. Instead, we will use it to indicate a low battery (although I would have preferred red or amber for that).

To use the LED, you need to set IO2 as an output, and then apply a high or low signal to it to turn the LED on and off.

There is a 2 kΩ current limiting resistor (R7) in series with the LED so that we do not burn out the LED or draw too much current from the GPIO pin.

An LED on its own has very low resistance, so applying 3.3 V directly would push too much current through it and burn it out. To protect against this, a resistor is normally placed in series with the LED.

However, current limiting resistors for LEDs are usually much smaller than 2 kΩ, so I do not understand why R7 has such a high value.

Initialization


static const adc1_channel_t BATTERY_READ_PIN = ADC1_GPIO36_CHANNEL;
static const gpio_num_t BATTERY_LED_PIN = GPIO_NUM_2;

static esp_adc_cal_characteristics_t gCharacteristics;

void Odroid_InitializeBatteryReader()
{
	// Configure LED
	{
		gpio_config_t gpioConfig = {};

		gpioConfig.mode = GPIO_MODE_OUTPUT;
		gpioConfig.pin_bit_mask = 1ULL << BATTERY_LED_PIN;

		ESP_ERROR_CHECK(gpio_config(&gpioConfig));
	}

	// Configure ADC
	{
		adc1_config_width(ADC_WIDTH_BIT_12);
		adc1_config_channel_atten(BATTERY_READ_PIN, ADC_ATTEN_DB_11);

		esp_adc_cal_value_t type = esp_adc_cal_characterize(
			ADC_UNIT_1, ADC_ATTEN_DB_11, ADC_WIDTH_BIT_12, 1100, &gCharacteristics);

		assert(type == ESP_ADC_CAL_VAL_EFUSE_VREF);
	}

	ESP_LOGI(LOG_TAG, "Battery reader initialized");
}

First we set the LED GPIO as an output so that we can toggle it when needed. Then we configure the ADC pin just as we did for the d-pad: 12-bit width and 11 dB of attenuation.

esp_adc_cal_characterize performs the calculations needed to characterize the ADC so that we can later convert the digital readings into a physical voltage.

Battery Read


uint32_t Odroid_ReadBatteryLevel(void)
{
	const int SAMPLE_COUNT = 20;


	uint32_t raw = 0;

	for (int sampleIndex = 0; sampleIndex < SAMPLE_COUNT; ++sampleIndex)
	{
		raw += adc1_get_raw(BATTERY_READ_PIN);
	}

	raw /= SAMPLE_COUNT;


	uint32_t voltage = 2 * esp_adc_cal_raw_to_voltage(raw, &gCharacteristics);

	return voltage;
}

We take twenty raw samples from the ADC pin and then divide by the sample count to get the average. As mentioned above, this helps reduce noise in the readings.

Then we use esp_adc_cal_raw_to_voltage to convert the raw value into a real voltage. Because of the voltage divider mentioned above, we double the result: the measured value is half the actual battery voltage.

Instead of coming up with tricky ways to convert this voltage into a charge percentage, we simply return the voltage. The calling code can decide what to do with it: convert it to a percentage, or just treat it as high or low.

The value is returned in millivolts, so the calling function needs to perform the appropriate conversion if it wants volts. This also keeps us from having to deal with floating-point values.

LED setting


void Odroid_EnableBatteryLight(void)
{
	gpio_set_level(BATTERY_LED_PIN, 1);
}

void Odroid_DisableBatteryLight(void)
{
	gpio_set_level(BATTERY_LED_PIN, 0);
}

These two simple functions are enough to use the LED. We can either turn on or turn off the light. Let the calling function decide when to do it.

We could create a task that periodically monitors the battery voltage and toggles the LED accordingly, but I would rather poll the battery voltage in our main loop and decide there how to set the LED.

Demo


uint32_t batteryLevel = Odroid_ReadBatteryLevel();

if (batteryLevel < 3600)
{
	Odroid_EnableBatteryLight();
}
else
{
	Odroid_DisableBatteryLight();
}

We simply query the battery level in the main loop, and if the voltage is below the threshold we turn on the LED to indicate that charging is needed. From what I have read, 3600 mV (3.6 V) is a reasonable low-charge indicator for lithium-ion batteries, but batteries are complicated.


Part 6: sound


The final step toward a complete interface to all of the Odroid Go hardware is to write the sound layer. Once that is done, we can move on to more general game programming that is less tied to the Odroid itself: all interaction with the peripherals will go through the Odroid functions.

Because of my lack of experience with audio programming and the lack of good documentation in IDF, implementing sound took the most time of anything in this project.

In the end, not much code was needed to play sound. Most of the time went into figuring out how to convert the audio data into the format the ESP32 expects and how to configure the ESP32 audio driver to match the hardware.

Digital Sound Basics


Digital sound consists of two parts: recording and playback .

Record


To record sound on a computer, we first need to convert it from a continuous (analog) signal into a discrete (digital) one. This is done with an analog-to-digital converter (ADC), which we already encountered when working with the d-pad in Part 2.

The ADC samples the incoming waveform and digitizes the value, which can then be saved to a file.

Play


A digital sound file can be converted back from digital to analog using a digital-to-analog converter (DAC). A DAC can only reproduce values in a certain range: for example, an 8-bit DAC with a 3.3 V supply can output analog voltages from 0 to 3.3 V in steps of about 12.9 mV (3.3 V divided by 256).

The DAC takes digital values and converts them back into a voltage, which can be fed to an amplifier, a speaker, or any other device that accepts an analog audio signal.

Sampling rate


When recording analog sound through the ADC, samples are taken at a certain frequency, and each sample is a “snapshot” of the sound signal at a point in time. This parameter is called the sampling frequency and is measured in hertz .

The higher the sampling rate, the more accurately we recreate the frequencies of the original signal. The Nyquist-Shannon (Kotelnikov) theorem states, in simple terms, that the sampling rate should be at least twice the highest frequency we want to capture.

The human ear hears roughly from 20 Hz to 20 kHz, so a sampling rate of 44.1 kHz, slightly more than twice the highest frequency we can perceive, is most often used for high-quality music. This ensures that the full range of instrument and voice frequencies can be reproduced.

Each sample takes up space in the file, however, so we cannot simply pick the highest possible rate. On the other hand, if we do not sample fast enough, we lose important information. The sampling rate should be chosen based on the frequencies present in the sound being reproduced.

Playback should be performed at the same sampling frequency as the source, otherwise the sound and its duration will be different.

Suppose ten seconds of sound were recorded at a sampling frequency of 16 kHz. If you play it with a frequency of 8 kHz, then its tone will be lower, and the duration will be twenty seconds. If you play it with a sampling frequency of 32 kHz, then the audible tone will be higher, and the sound itself will last five seconds.

This video shows the difference in sample rates with examples.

Bit depth


The sampling rate is only half of the equation. Sound also has a bit depth, that is, the number of bits per sample.

When the ADC captures a sample of an audio signal, it must convert the analog value to digital, and the range of representable values depends on the number of bits used: 8 bits (256 values), 16 bits (65,536 values), 32 bits (4,294,967,296 values), and so on.

The number of bits per sample determines the dynamic range of the sound, i.e. the difference between the loudest and quietest parts. The most common bit depth for music is 16 bits.

During playback, it is necessary to provide the same bit depth as the source, otherwise the sound and its duration will change.

For example, you have an audio file with four samples stored as 8 bits: [0x25, 0xAB, 0x34, 0x80]. If you try to play them as if they were 16-bit, you will get only two samples: [0x25AB, 0x3480]. This will not only lead to incorrect values ​​of sound samples, but also halve the number of samples, and hence the duration of the sound.

It is also important to know the sample format: 8-bit signed, 8-bit unsigned, 16-bit signed, 16-bit unsigned, and so on. Usually 8-bit samples are unsigned and 16-bit samples are signed. Mixing them up results in badly distorted sound.

This video shows the bit depth difference with examples.

WAV files


Most often, raw audio data on a computer is stored in the WAV format , which has a simple header that describes the audio format (sampling frequency, bit depth, size, etc.), followed by the audio data itself.

The sound is not compressed at all (unlike formats like MP3), so we can easily play it without the need for a codec library.
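
For reference, the canonical 44-byte PCM WAV header looks roughly like the struct below (a sketch; real files can contain extra chunks, and we will not actually parse it in this demo). It shows where the sample rate and bit depth live.

#include <stdint.h>

typedef struct __attribute__((packed))
{
	char     riffTag[4];    // "RIFF"
	uint32_t riffSize;      // total file size minus 8 bytes
	char     waveTag[4];    // "WAVE"
	char     fmtTag[4];     // "fmt "
	uint32_t fmtSize;       // 16 for plain PCM
	uint16_t audioFormat;   // 1 = uncompressed PCM
	uint16_t channelCount;  // 1 = mono, 2 = stereo
	uint32_t sampleRate;    // e.g. 5012
	uint32_t byteRate;      // sampleRate * channelCount * bitsPerSample / 8
	uint16_t blockAlign;    // channelCount * bitsPerSample / 8
	uint16_t bitsPerSample; // e.g. 8
	char     dataTag[4];    // "data"
	uint32_t dataSize;      // number of bytes of sample data that follow
} WavHeader;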

The main problem with WAV files is that due to the lack of compression, they can be quite large. File size is directly related to the duration, sampling rate, and bit depth.

Size (bits) = Duration (s) x Sampling rate (samples/s) x Bit depth (bits/sample)

The sampling rate affects the file size the most, so the easiest way to save space is to pick a suitably low value. For example, ten seconds of 8-bit mono audio at 5012 Hz comes to only about 49 KB. We are going for an old-school sound anyway, so a low sampling rate suits us.

I2S


The ESP32 has a peripheral that makes interfacing with audio hardware relatively simple: Inter-IC Sound (I2S).

The I2S protocol is quite simple and consists of only three signals: a clock, a channel select (left or right), and the data line itself.

The clock frequency depends on the sampling rate, bit depth, and number of channels. The clock toggles once for every bit of data, so for correct playback the clock frequency must be set accordingly.

Clock frequency = Sampling rate (samples/s) x Bit depth (bits/sample) x Number of channels

For example, 16-bit stereo audio at 44.1 kHz needs a bit clock of 44,100 x 16 x 2 ≈ 1.41 MHz.

The ESP32's I2S driver has two possible modes: it can either output data on pins connected to an external I2S receiver, which decodes the protocol and passes the data to an amplifier, or it can send data to the ESP32's internal DAC, which outputs an analog signal that can be fed to an amplifier.

The Odroid Go has no I2S decoder on the board, so we have to use the internal 8-bit ESP32 DAC, which means we must use 8-bit sound. The chip has two DACs, one connected to IO25 and the other to IO26.

The procedure looks like this:

  1. We transfer audio data to the I2S driver
  2. I2S driver sends audio data to 8-bit internal DAC
  3. The internal DAC outputs an analog signal
  4. The analog signal is transmitted to the sound amplifier



If we look at the audio section of the Odroid Go schematic, we see two GPIO pins (IO25 and IO26) connected to the inputs of the audio amplifier (PAM8304A). IO25 is also connected to the amplifier's /SD signal, the pin that turns the amplifier on or off (a low signal means shutdown). The amplifier outputs are connected to a single speaker (P1).

Remember that IO25 and IO26 are the outputs of the ESP32's 8-bit DACs; that is, one DAC is connected to IN- and the other to IN+.

IN- and IN+ are the amplifier's differential inputs. Differential inputs are used to reduce noise caused by electromagnetic interference: any noise present on one signal will also be present on the other, so subtracting one signal from the other cancels the noise.

If you look at the amplifier's datasheet, it includes a Typical Applications Circuit, which is the manufacturer's recommended way to use the amplifier.


It recommends connecting IN- to ground, IN+ to the input signal, and /SD to the on/off signal. If there is 0.005 V of noise, IN- reads 0 V + 0.005 V and IN+ reads VIN + 0.005 V. Subtracting one input from the other yields the true signal value (VIN) without the noise.

However, the designers of Odroid Go did not use the recommended configuration.

Looking at the Odroid Go schematic again, we see that the designers connected one DAC output to IN-, and that this same DAC output is also connected to /SD. /SD is an active-low shutdown signal, so for the amplifier to work it must be driven high.

This means that to use the amplifier we must not use IO25 as a DAC at all, but as a GPIO output driven permanently high. However, that puts a high level on IN-, which the amplifier datasheet does not recommend (it should be grounded). We must therefore use the DAC connected to IO26, since our I2S output has to be fed to IN+. As a result we lose the noise cancellation, because IN- is not tied to ground, and a faint noise constantly comes from the speaker.

We need to configure the I2S driver carefully, because we want to use only the DAC connected to IO26. If we also used the DAC connected to IO25, it would constantly toggle the amplifier's shutdown signal, and the sound would be terrible.

On top of this weirdness, when using the 8-bit internal DAC the ESP32's I2S driver still requires 16-bit samples, and it sends only the high byte of each sample to the DAC. So we have to take our 8-bit sound and expand it into a buffer twice the size, with each sample sitting in the high byte and the low byte unused. We then hand that buffer to the I2S driver, which passes the high byte of each sample to the DAC. Unfortunately, this means we "pay" for 16 bits but can only use 8.

Multitasking


Unfortunately, the game cannot work on one core, as I originally wanted, because there seems to be a bug in the I2S driver.

The I2S driver is supposed to use DMA (like the SPI driver), which means we should be able to start an I2S transfer and then carry on with our work while the driver streams the audio data.

Instead, the CPU is blocked for the duration of the sound, which is completely unacceptable for a game. Imagine pressing the jump button and having the player's sprite freeze for 100 ms while the jump sound plays.

To work around this, we can take advantage of the ESP32's two cores. We create a task (i.e. a thread) on the second core that handles sound playback. The main game task passes a pointer to the sound buffer to the sound task; the sound task starts the I2S transfer and blocks for the duration of playback, while the main task on the first core (input handling and rendering) keeps running without blocking.

Initialization


Knowing all this, we can initialize the I2S driver properly. It takes only a few lines of code; the hard part is figuring out which parameters must be set for correct playback.

static const gpio_num_t AUDIO_AMP_SD_PIN = GPIO_NUM_25;

// Combines a pointer to a sound buffer and its length so it can be queued
typedef struct
{
	uint16_t* buffer;
	size_t length;
} QueueData;

static QueueHandle_t gQueue;

static void PlayTask(void *arg)
{
	for(;;)
	{
		QueueData data;

		if (xQueueReceive(gQueue, &data, 10))
		{
			size_t bytesWritten;
			i2s_write(I2S_NUM_0, data.buffer, data.length, &bytesWritten, portMAX_DELAY);
			i2s_zero_dma_buffer(I2S_NUM_0);
		}

		vTaskDelay(1 / portTICK_PERIOD_MS);
	}
}

void Odroid_InitializeAudio(void)
{
	// Configure the amplifier shutdown signal
	{
		gpio_config_t gpioConfig = {};

		gpioConfig.mode = GPIO_MODE_OUTPUT;
		gpioConfig.pin_bit_mask = 1ULL << AUDIO_AMP_SD_PIN;

		ESP_ERROR_CHECK(gpio_config(&gpioConfig));

		gpio_set_level(AUDIO_AMP_SD_PIN, 1);
	}

	// Configure the I2S driver
	{
		i2s_config_t i2sConfig= {};

		i2sConfig.mode = I2S_MODE_MASTER | I2S_MODE_TX | I2S_MODE_DAC_BUILT_IN;
		i2sConfig.sample_rate = 5012;
		i2sConfig.bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT;
		i2sConfig.communication_format = I2S_COMM_FORMAT_I2S_MSB;
		i2sConfig.channel_format = I2S_CHANNEL_FMT_ONLY_LEFT;
		i2sConfig.dma_buf_count = 8;
		i2sConfig.dma_buf_len = 64;

		ESP_ERROR_CHECK(i2s_driver_install(I2S_NUM_0, &i2sConfig, 0, NULL));
		ESP_ERROR_CHECK(i2s_set_dac_mode(I2S_DAC_CHANNEL_LEFT_EN));
	}

	// Create task for playing sounds so that our main task isn't blocked
	{
		gQueue = xQueueCreate(1, sizeof(QueueData));
		assert(gQueue);

		BaseType_t result = xTaskCreatePinnedToCore(&PlayTask, "I2S Task", 1024, NULL, 5, NULL, 1);
		assert(result == pdPASS);
	}
}

First we configure IO25 (connected to the amplifier's shutdown signal) as an output so that we can control the amplifier, and drive it high to turn the amplifier on.

Next we configure and install the I2S driver itself. I will go through the configuration line by line, because each line needs some explanation:

  • mode
    • we set the driver as the master (it controls the bus) and as a transmitter (we only send data), and configure it to use the built-in 8-bit DAC (because the Odroid Go has no external DAC).
  • sample_rate
    • 5012 Hz is the sampling rate of the sound effects we will generate, and playback has to happen at the same rate the sound was recorded at. By the Nyquist theorem this lets us reproduce frequencies up to about 2500 Hz, which is fine for old-school effects.
  • bits_per_sample
    • even though the ESP32's internal DAC is only 8-bit, the I2S driver requires 16-bit samples; only the high 8 bits of each sample actually reach the DAC.
  • communication_format
    • we use the MSB format, since our 8-bit samples are packed into the most significant byte of each 16-bit sample.
  • channel_format
    • the GPIO connected to IN+ is IO26, which corresponds to the "left" channel of the built-in I2S DAC. If we also output on the right channel, it would drive IO25, which would toggle the amplifier's shutdown signal.
  • dma_buf_count and dma_buf_len
    • the number and length of the DMA buffers; I took these values from the IDF examples, and they seem to work fine.

Then we create a queue, which is the FreeRTOS mechanism for passing data between tasks: one task puts data into the queue and another task pulls it out. We create a struct called QueueData that combines the pointer to the sound buffer and the buffer's length into a single item that can be placed in the queue.

Next we create a task that runs on the second core and point it at the PlayTask function, which performs the playback. The task itself is an endless loop that keeps checking whether there is data in the queue. If there is, it sends the data to the I2S driver to be played. The i2s_write call blocks, and that is fine, because the task runs on a separate core from the main game thread.

The call to i2s_zero_dma_buffer is needed so that no sound keeps coming out of the speaker after playback finishes. I do not know whether this is a bug in the I2S driver or expected behavior, but without it the speaker emits garbage after the sound buffer finishes playing.

Play sound


void Odroid_PlayAudio(uint16_t* buffer, size_t length)
{
	QueueData data = {};

	data.buffer = buffer;
	data.length = length;

	xQueueSendToBack(gQueue, &data, portMAX_DELAY);
}

Because all of the configuration has already been done, the function that plays a sound buffer is extremely simple; the real work happens in the other task. We put the buffer pointer and its length into a QueueData structure and then push it onto the queue that PlayTask reads from.

With this scheme, one sound buffer must finish playing before the next one can start. So if a jump and a shot happen at the same time, the first sound plays before the second rather than together with it.

Most likely, in the future I will mix the sounds of each frame into the single buffer that is handed to the I2S driver, which will allow multiple sounds to play at the same time. A rough sketch of that idea follows below.
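
A rough sketch of that mixing idea (my illustration, not the author's implementation): sum the active 8-bit sources around their midpoint, clamp, and pack the result into the high byte of each 16-bit sample for the I2S driver.

#include <stdint.h>
#include <stddef.h>

static void MixSoundsForFrame(uint16_t* out, const uint8_t* const* sources, int sourceCount, size_t length)
{
	for (size_t i = 0; i < length; ++i)
	{
		int mixed = 128; // 8-bit unsigned samples are centered on 128 (silence)

		for (int s = 0; s < sourceCount; ++s)
		{
			mixed += sources[s][i] - 128; // accumulate each source's deviation from silence
		}

		// Clamp to the 8-bit range to avoid wrap-around distortion
		if (mixed < 0) { mixed = 0; }
		if (mixed > 255) { mixed = 255; }

		out[i] = (uint16_t)mixed << 8; // only the high byte reaches the DAC
	}
}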

Demo


We will generate our sound effects with jsfxr, a tool designed specifically for generating the kind of game sounds we need. It lets us set the sampling rate and bit depth directly and then export a WAV file.

I created a simple jump sound effect reminiscent of Mario's jump. It has a sampling rate of 5012 Hz (matching what we configured during initialization) and a bit depth of 8 (because the DAC is 8-bit).


Instead of parsing the WAV file in code, we do the same thing we did for the sprite in the Part 4 demo: strip the WAV header with a hex editor, so that the file read from the SD card contains only raw data. We also hard-code the length of the sound instead of reading it from the file. In the future we will load sound resources differently, but this is enough for the demo.

The raw file can be downloaded here.

// Load sound effect
uint16_t* soundBuffer;
int soundEffectLength = 1441;
{
	FILE* soundFile = fopen("/sdcard/jump", "r");
	assert(soundFile);

	uint8_t* soundEffect = malloc(soundEffectLength);
	assert(soundEffect);

	soundBuffer = malloc(soundEffectLength*2);
	assert(soundBuffer);

	fread(soundEffect, soundEffectLength, 1, soundFile);
	fclose(soundFile);

	for (int i = 0; i < soundEffectLength; ++i)
	{
		// 16 bits required but only MSB is actually sent to the DAC
		soundBuffer[i] = (soundEffect[i] << 8u);
	}

	free(soundEffect);
}

We load the 8-bit data into the soundEffect buffer and then copy it into the 16-bit soundBuffer, where each value is stored in the high eight bits. Again, this is required by the way IDF implements the built-in DAC mode.

With the 16-bit buffer ready, we can play the sound when a button is pressed. The volume button is the logical choice for this.

int lastState = 0;

for (;;)
{
	[...]

	int thisState = input.volume;

	if ((thisState == 1) && (thisState != lastState))
	{
		Odroid_PlayAudio(soundBuffer, soundEffectLength*2);
	}

	lastState = thisState;

	[...]
}

We track the previous button state so that a single press does not call Odroid_PlayAudio several times.


Source


All source code is here.
