Foreword
In the previous post we implemented sprites within our Android C64 emulator. The addition of sprites made the game DAN DARE fully playable within our emulator.
The only thing left in this series of posts is adding sound.
I must admit that I was a bit lazy while working on this post :-) Implementing SID emulation from scratch can appear quite a mountain to climb, especially when you are near the end of writing your C64 emulator.
To make the task bearable, I have decided to peek a bit at how other open source emulators have implemented SID emulation, use their code and modify it to fit within my emulator.
The emulator I "peeked" at was Frodo C64. This is a portable C64 emulator that runs on a number of platforms, including Windows, Linux, AmigaOS and others.
There is even an Android fork of Frodo C64 on GitHub, here, by Arnaud Brochard. This fork, by the way, is the source I will be focusing on in this post when discussing the SID functionality and how to make it fit within our emulator.
Frodo C64 SID emulation
Let us have a quick overview of how SID emulation is implemented within Frodo C64. As discussed in the foreword, we are going to examine the fork by Arnaud Brochard.
One thing to take note of is that the Frodo C64 source code is written in C++. Thus, to make it fit within the C source code of my emulator, I had to do some tinkering.
To stay focused within this post, I am not going to discuss what tinkering was required for the C++-to-C conversion.
However, just keep this conversion in mind when comparing my source code with Frodo C64's SID emulation.
The two key source files that we will be using from Frodo C64 for SID emulation are SID.cpp and SID_android.h.
Of course, there are other source files providing the JNI glue that gives our native code access to the Java sound system. I will, however, discuss this JNI glue in the next section on integrating the Frodo SID emulation within our emulator.
Let us start having a look at the file SID.cpp.
The first function of importance within SID.cpp is calc_buffer. The purpose of this function is to provide an output buffer with audio samples that can be played on the Android sound system.
The calc_buffer function is the key function for SID emulation. It loops through all the SID voices and renders samples for the applicable waveform of each voice. This function is also responsible for applying an envelope to the waveform with the supplied ADSR parameters.
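To make the envelope idea concrete, here is a minimal sketch of a linear ADSR state machine in C. This is purely illustrative and not Frodo's actual implementation (the real SID and Frodo drive the envelope with rate counters and lookup tables); all names and step values here are my own.

```c
/* Illustrative sketch only: a minimal linear ADSR envelope, NOT Frodo's
 * actual code. The envelope level scales each raw waveform sample. */
enum adsr_phase { ATTACK, DECAY, SUSTAIN, RELEASE };

typedef struct {
    enum adsr_phase phase;
    double level;        /* current envelope level, 0.0 .. 1.0 */
    double attack_step;  /* level increase per sample during attack */
    double decay_step;   /* level decrease per sample during decay */
    double sustain;      /* level held while the gate stays on */
    double release_step; /* level decrease per sample after gate off */
} envelope;

/* advance the envelope by one sample and return the new level */
double envelope_next(envelope *e) {
    switch (e->phase) {
    case ATTACK:
        e->level += e->attack_step;
        if (e->level >= 1.0) { e->level = 1.0; e->phase = DECAY; }
        break;
    case DECAY:
        e->level -= e->decay_step;
        if (e->level <= e->sustain) { e->level = e->sustain; e->phase = SUSTAIN; }
        break;
    case SUSTAIN:
        break; /* hold until the gate bit is cleared */
    case RELEASE:
        e->level -= e->release_step;
        if (e->level < 0.0) e->level = 0.0;
        break;
    }
    return e->level;
}

/* clearing the voice's gate bit would move the envelope to RELEASE */
void envelope_gate_off(envelope *e) { e->phase = RELEASE; }
```

Each output sample is then the raw waveform sample multiplied by the current envelope level.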
Let us have a quick look at how the samples are created for the different waveforms.
Here are some snippets from the waveform switch for getting a sample for the applicable waveform:
```cpp
...
case WAVE_TRI:
    if (v->ring)
        output = TriTable[(v->count ^ (v->mod_by->count & 0x800000)) >> 11];
    else
        output = TriTable[v->count >> 11];
    break;
case WAVE_SAW:
    output = v->count >> 8;
    break;
case WAVE_RECT:
    if (v->count > (uint32)(v->pw << 12))
        output = 0xffff;
    else
        output = 0;
    break;
...
case WAVE_NOISE:
    if (v->count > 0x100000) {
        output = v->noise = sid_random() << 8;
        v->count &= 0xfffff;
    } else
        output = v->noise;
    break;
...
```
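A note on WAVE_NOISE: sid_random is Frodo's own pseudo-random generator. The actual SID chip derives its noise waveform from a 23-bit linear feedback shift register with feedback taps at bits 22 and 17. As an aside (this is a sketch of the hardware behaviour, not Frodo's code):

```c
#include <stdint.h>

/* One step of the SID's 23-bit noise LFSR: the new bit shifted in is
 * the XOR of bits 22 and 17. Frodo uses its own sid_random() instead;
 * this just shows what the hardware does. */
uint32_t noise_lfsr_step(uint32_t lfsr) {
    uint32_t feedback = ((lfsr >> 22) ^ (lfsr >> 17)) & 1;
    return ((lfsr << 1) | feedback) & 0x7FFFFF; /* keep 23 bits */
}
```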
As you can see, all waveforms calculate their output samples directly except the triangle waveform, which uses a lookup table. I presume the lookup table for the triangle waveform was implemented for performance reasons.
The same triangle lookup table is used to get triangle waveforms of different frequencies. To get a triangle waveform for the desired frequency you just need to skip a number of entries each time within the lookup table.
The same principle of sample skipping also applies for the other waveforms to get a waveform for the desired frequency.
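The sample-skipping idea can be sketched as a phase accumulator. This is a hypothetical illustration with my own table size and fixed-point layout, not Frodo's actual TriTable or counter format:

```c
#include <stdint.h>

/* Sketch of "sample skipping": each voice keeps a phase accumulator that
 * advances by a fixed increment per output sample; the top bits index a
 * wavetable. A larger increment skips more entries, raising the pitch. */
#define TABLE_BITS 8
#define TABLE_SIZE (1 << TABLE_BITS)

static uint8_t tri_table[TABLE_SIZE]; /* precomputed triangle wave */

void build_tri_table(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        tri_table[i] = (i < TABLE_SIZE / 2)
            ? (uint8_t)(i * 2)                     /* rising half  */
            : (uint8_t)((TABLE_SIZE - 1 - i) * 2); /* falling half */
}

/* phase increment for a tone frequency at a sample rate, with 8
 * fractional bits so non-integer skip rates stay accurate */
uint32_t phase_increment(double tone_hz, double sample_rate) {
    return (uint32_t)(tone_hz / sample_rate * TABLE_SIZE * 256.0);
}

/* render n samples of the triangle wave into out[] */
void render(uint8_t *out, int n, uint32_t *phase, uint32_t inc) {
    for (int i = 0; i < n; i++) {
        out[i] = tri_table[(*phase >> 8) & (TABLE_SIZE - 1)];
        *phase += inc; /* doubling inc doubles the frequency */
    }
}
```

The fractional bits matter: for 440 Hz at 44100 Hz output, the skip rate is not a whole number of table entries per sample, and truncating it would detune the voice.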
Those familiar with the SID will recall that it allowed more than one waveform to be enabled per voice. This produced unintuitive results.
Frodo C64 also emulates these scenarios where more than one waveform is enabled per voice. Frodo, however, again uses a lookup table to get the samples for these combined waveforms. From the comments it looks like the samples were obtained by recording the output of a physical SID with the applicable waveforms combined.
Another method of importance within SID.cpp is WriteRegister. This is the method you use to delegate CPU writes to the SID memory region. calc_buffer ultimately also uses the values passed via WriteRegister to do the required rendering.
When integrating with our emulator, we will also be calling WriteRegister from memory.c when we encounter a write to the SID memory region. More on this in the next section.
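The dispatch in memory.c could look something like the following sketch. The function and variable names here are placeholders of my own, not the emulator's actual identifiers:

```c
#include <stdint.h>

/* Hypothetical sketch: CPU writes landing in the SID's memory-mapped
 * register area $D400-$D7FF are forwarded to the SID emulation. */
#define SID_BASE 0xD400
#define SID_END  0xD7FF

static uint8_t sid_regs[32]; /* the 29 SID registers, mirrored */

void sid_write_register(uint16_t adr, uint8_t value) {
    /* the register block repeats every 32 bytes through $D400-$D7FF */
    sid_regs[adr & 0x1F] = value;
}

void mem_write(uint16_t adr, uint8_t value, uint8_t *ram) {
    if (adr >= SID_BASE && adr <= SID_END)
        sid_write_register(adr, value);
    else
        ram[adr] = value;
}
```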
This concludes our discussion on SID.cpp.
Let us now move on to SID_android.h.
This header file has some Android specifics regarding SID.
A method I want to highlight within this file is EmulateLine. Each time we finish rendering a line on the display we need to invoke this method.
Each time EmulateLine is called, a check is done whether enough line periods have elapsed to fill an audio buffer.
If so, we call calc_buffer and send the resulting output to the Java sound system.
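That check boils down to an accumulator that spreads the sample rate evenly over the PAL raster lines (312 lines at 50 frames per second). Here is a sketch of the arithmetic, assuming a 44100 Hz sample rate (Frodo's actual rate on Android may differ):

```c
/* Sketch of the per-line sample budget in EmulateLine. 312 * 50 is PAL
 * timing; SAMPLE_FREQ of 44100 Hz is an assumption for illustration. */
#define SAMPLE_FREQ 44100
#define LINES_PER_SECOND (312 * 50) /* 312 raster lines, 50 frames/s */

/* returns how many audio samples this raster line must contribute */
int samples_for_line(int *divisor) {
    int to_output = 0;
    *divisor += SAMPLE_FREQ;
    while (*divisor >= 0) {
        *divisor -= LINES_PER_SECOND;
        to_output++;
    }
    return to_output;
}
```

On average each line contributes 44100 / 15600 ≈ 2.83 samples; the accumulator carries the fractional remainder from line to line so no samples are lost.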
One might ask at this point in time how big the audio buffer should be made.
The following comment within SID_android.h gives us a clue:
```
Note that too large buffers will not work very well: The speed of the
C64 is slowed down to an average speed of 100% by the blocking write()
call in EmulateLine(). If you use a buffer of, say 4096 bytes, that
will happen only about every 4 frames, which means that the emulation
runs much faster in some frames, and much slower in others. On really
fast machines, it might make sense to use an even smaller buffer size.
```
The resulting size of the audio buffer that Frodo uses is 512 samples.
A sample buffer size of 512 samples translates to a period of more or less one VIC-II frame.
So, should it happen that you change a particular SID parameter a number of times during the period of a VIC-II frame, only the last value will be used by calc_buffer.
Integration
I am now going to discuss how to integrate the Frodo SID functionality within our emulator. First, we need to add the following methods within FrontActivity:
```java
public void initAudio(int freq, int bits, int sound_packet_length) {
    if (audio == null) {
        audio = new AudioTrack(AudioManager.STREAM_MUSIC, freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                bits == 8 ? AudioFormat.ENCODING_PCM_8BIT
                          : AudioFormat.ENCODING_PCM_16BIT,
                freq == 44100 ? 32 * 1024 : 16 * 1024,
                AudioTrack.MODE_STREAM);
        audio.play();
    }
}

public void sendAudio(short[] data) {
    if (audio != null) {
        long startTime = System.currentTimeMillis();
        audio.write(data, 0, data.length);
        long endTime = System.currentTimeMillis();
        System.out.println(endTime - startTime);
    }
}
```
Frodo C64 contains similar Java methods, but in a different place. In our case we define them within our FrontActivity.
Within initAudio we create a global AudioTrack object called audio.
An AudioTrack object accepts samples via write(), which it then plays on the speaker of your Android device.
The sendAudio method receives a buffer of samples to output as sound. We will invoke sendAudio when we have a set of samples from calc_buffer.
Next up, we will discuss the JNI glue required so our native code can communicate with the Java layer in order to send sound samples to play.
Up to now we have invoked native methods from Java code quite a number of times.
We have, however, not actually done the reverse, that is, invoking Java methods from native code.
There is a bit of a headache involved when you want to invoke a Java method from native code: You need to have a JNIEnv instance.
A JNIEnv instance is passed as the first function parameter whenever a native method is invoked from Java. One might therefore be tempted to think that one can just invoke a native method from Java during initialisation and, within this native method, store the JNIEnv instance in a global variable.
This approach has a limitation: a JNIEnv instance is only applicable to a particular thread.
Our application has two threads. One is the main GUI thread and the other one is the GLRenderer thread, which we will also use to generate SID sound samples.
So, if we set a global JNIEnv instance when our FrontActivity initialises, our GLRenderer thread will not be able to use this instance.
All hope, however, is not lost. We can still get a valid JNIEnv instance with the help of a JavaVM instance. A JavaVM instance, by the way, can be used by all threads running within the virtual machine.
Firstly, we need to get an instance of JavaVM in native code as follows:
```c
jint JNI_OnLoad(JavaVM* aVm, void* aReserved) {
    gJavaVM = aVm;
    return JNI_VERSION_1_6;
}
```
The virtual machine will invoke JNI_OnLoad when it loads your native library.
This method can be in any of your C files; it doesn't matter which. Just ensure that you define it only once within your native code.
We can now get a JNIEnv instance that is valid for the thread doing the sound rendering.
The place I chose for this functionality is within EmulateLine of SID_android.h:
```c
void EmulateLine() {
    ...
    if (global_env == NULL) {
        (*gJavaVM)->AttachCurrentThread(gJavaVM, &global_env, NULL);
    }
    ...
}
```
This assumes you have defined global_env as a global variable within your native code.
We need to define a couple more JNI hooks. So, within memory.c define the following:
```c
...
jobject currentActivity;
jmethodID initAudio;
jmethodID sendAudio;
...

void Java_com_johan_emulator_engine_Emu6502_setMainActivityObject(JNIEnv* env,
        jobject pObj, jobject activity) {
    currentActivity = (*env)->NewGlobalRef(env, activity);
    jclass thisClass = (*env)->GetObjectClass(env, currentActivity);
    initAudio = (*env)->GetMethodID(env, thisClass, "initAudio", "(III)V");
    sendAudio = (*env)->GetMethodID(env, thisClass, "sendAudio", "([S)V");
}
```
We will invoke this method from onCreate in our FrontActivity, passing the activity itself as a reference.
You will also see something interesting when we store this reference in our native method: we create a global reference and store that instead.
Remember that object references passed to native methods from Java are local references, which are only valid while the native method executes. Once the native method exits, such a local reference isn't valid any more.
Let's continue with the rest of the code within this native method. We get the class of our instance and from that the method IDs for initAudio and sendAudio, which we store.
Note that method IDs can be stored and used globally; for them you don't need to worry about local versus global references.
We now have all the important JNI hooks defined. Let us now see where they are used. All this happens within EmulateLine of SID_android.h. To get the overall picture, I am showing you the whole method:
```c
void EmulateLine() {
    static int divisor = 0;
    static int to_output = 0;
    static int buffer_pos = 0;
    static int loop_n = 2;
    static int loop_c = 0;

    if (!ready)
        return;

    if (global_env == NULL) {
        (*gJavaVM)->AttachCurrentThread(gJavaVM, &global_env, NULL);
    }

    sample_buf[sample_in_ptr] = volume;
    sample_in_ptr = (sample_in_ptr + 1) % SAMPLE_BUF_SIZE;

    // Now see how many samples have to be added for this line
    divisor += SAMPLE_FREQ;
    while (divisor >= 0)
        divisor -= 312*50, to_output++;

    // Calculate the sound data only when we have enough to fill
    // the buffer entirely
    if ((buffer_pos + to_output) >= sndbufsize) {
        int datalen = sndbufsize - buffer_pos;
        to_output -= datalen;
        calc_buffer(sound_buffer + buffer_pos, datalen*2);

        if (!audioarray) {
            jshortArray tempArrayRef =
                (*global_env)->NewShortArray(global_env, sndbufsize*loop_n);
            audioarray = (*global_env)->NewGlobalRef(global_env, tempArrayRef);
        }
        (*global_env)->SetShortArrayRegion(global_env, audioarray,
                loop_c*sndbufsize, sndbufsize, sound_buffer);
        loop_c++;
        if (loop_c == loop_n) {
            (*global_env)->CallVoidMethod(global_env, currentActivity,
                    sendAudio, audioarray);
            loop_c = 0;
        }
        buffer_pos = 0;
    }
}
```
The bottom line of this code is to call the Java method sendAudio so that the produced sound samples can be sent to the Java sound system.
We have, however, a problem with the datatype of the native array we use to store the samples: we cannot just pass it to the Java method sendAudio as is.
Arrays in Java are objects, so we need to convert our native array into a Java array object before passing it to sendAudio.
We do this by first defining a Java array, audioarray, as a jshortArray. We then call SetShortArrayRegion to copy the data from our native array into the Java array.
This concludes our Integration discussion.
A Test Run
During a test run the sound played smoothly on my Android device. There was, however, a side effect on the video output: every now and again, a frame would freeze for a while.
Some closer investigation revealed that the issue was caused by one of the parameters passed when we create an AudioTrack instance.
The offending parameter was the second last one, specifying the underlying buffer size. The value specified was 32 KB.
In effect, while the underlying buffer still had space left, it quickly accepted each batch of 512 samples. This continued until the buffer eventually filled up completely.
When the buffer is full, the write call on AudioTrack blocks until a portion of the samples has played, making space for new samples.
This blocking caused the jerky graphics.
I resolved this issue by reducing the underlying buffer size to just double the size of the buffer returned by calc_buffer, so we are effectively double buffering.
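In numbers, this is a back-of-the-envelope calculation (the exact constants depend on your sndbufsize; 512 matches the sample count mentioned earlier):

```c
/* AudioTrack buffer size for double buffering: room for two calc_buffer
 * outputs, each sndbufsize 16-bit (2-byte) samples. */
int audiotrack_buffer_bytes(int sndbufsize) {
    return 2 * sndbufsize * (int)sizeof(short);
}
```

With sndbufsize of 512 this gives a 2 KB buffer instead of 32 KB, so the blocking write now throttles the emulator once per buffer rather than letting it race ahead and stall.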
In Summary
In this post we implemented SID emulation within our emulator. This concludes my series on writing a C64 emulator for Android.