Friday, 30 December 2016

Part 17: Implementing Sprites

Foreword

In the previous post we implemented the remaining graphics modes required by the game Dan Dare.

One thing we haven't implemented for the game, though, is sprites.

Sprites on the C64 have some complexities of their own. A sprite can contain transparent pixels, and can be shown either behind or in front of the text layer.

These complexities can cause a couple of headaches when implementing sprites within our emulator.

Luckily, most Android devices come shipped with a GPU that can take some of these complexities off our hands. For example, GPUs support a feature called Alpha Blending that makes it easy for us to implement transparency.

Android surfaces the functionality of the GPU via OpenGL ES. Therefore, in this post I will also be talking a bit about OpenGL ES within the context of Android.

Finally, we will be implementing Sprites within our emulator using OpenGL ES.

Overview of OpenGL ES

OpenGL ES can be viewed as a branch of OpenGL.

OpenGL is an open standard for accessing GPU hardware in a standard way.

OpenGL ES stems from OpenGL, but is optimised for mobile and embedded devices, which have limited memory and CPU power and in general try to maximise battery life.

Implementing Stacked Rendering

To implement sprite rendering I will be using the same approach as in my JavaScript C64 emulator.

The approach I used in my JavaScript C64 emulator involved stacking a number of canvases on top of each other. I used the following canvases:

  • Background
  • Background Sprites
  • Foreground
  • Foreground Sprites 

Each canvas has a Z-order attribute telling the renderer in which order the canvases should be stacked.

We can use the same approach in our Android C64 emulator with the help of OpenGL ES.

We draw each layer as two textured triangles arranged to form a rectangle. To specify the stacking order we will be using different z-coordinate values for the different planes.

Apart from specifying the correct z-order for the stacking, it is also important to draw the planes in the correct order: start with the Background layer and work your way through until you get to the Foreground Sprites layer. If you don't get the order right, alpha blending will not produce the desired effect, and your foreground will appear as the background and vice versa.

We will now go into implementation details for stacking.

Defining a new Surface

Up to now, we have used a SurfaceView to display the output of our emulator on the screen. However, since we want to go the OpenGL ES route, this will need to change.

We need to define a GLSurfaceView that OpenGL can use to draw to the screen. So, within content_front.xml we need to make the following change:

...
    <android.opengl.GLSurfaceView
        android:id="@+id/Video"
        android:layout_width="match_parent"
        android:layout_height="100px" />
...

Next, we need to add a renderer to the GLSurfaceView. So, within FrontActivity.java we need to make the following changes:

...
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_front);

        GLSurfaceView mGLSurfaceView = (GLSurfaceView) findViewById(R.id.Video);

        mGLSurfaceView.setEGLContextClientVersion(2);
        MyGL20Renderer myRenderer = new MyGL20Renderer(this);
        mGLSurfaceView.setRenderer(myRenderer);
...
     }
...

We call setEGLContextClientVersion to request an OpenGL ES 2.0 context.

MyGL20Renderer is a class we need to define that will perform the necessary drawing. The following important methods are defined within this class:

    public void onDrawFrame(GL10 unused)
    {
        GLES20.glDisable(GLES20.GL_DEPTH_TEST);

        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

        GLES20.glDepthMask(false);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        //Set the camera position (View Matrix)
        Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);

        //Calculate the projection and view transformation
        Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);

        //Create a rotation transformation for the triangle
        Matrix.setRotateM(mRotationMatrix, 0, mAngle, 0, 0, -1.0f);

        //Combine the rotation matrix with the projection and camera view
        Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0);
        GLES20.glEnable(GLES20.GL_BLEND);

        emuInstance.clearDisplayBuffer();
        emuInstance.runBatch(0);
        sprite.Draw(mMVPMatrix, byteBuffer);
    }

    public void onSurfaceChanged(GL10 unused, int width, int height)
    {
        GLES20.glViewport(0, 0, width, height);

        float ratio = (float) width / height;

        Matrix.orthoM(mProjMatrix,0, -ratio, ratio, -1, 1, 3, 7);
    }


The onSurfaceChanged method gets invoked when the surface is created and whenever the dimensions of the screen change, for example when turning the device from portrait to landscape orientation.

In onSurfaceChanged we apply an Orthographic Projection Matrix, which stands in contrast with a Perspective Projection Matrix. The latter makes distant objects look smaller, whereas with the former everything looks the same size irrespective of distance.

Strictly speaking, we don't need to apply a matrix for orthographic projection at all. We do, however, need a matrix to get the aspect ratio right; without it the graphics will look squashed.

There is a lot happening within the method onDrawFrame. Before we discuss what is going on within this method, it is worth mentioning that it gets invoked by the OpenGL subsystem. So we don't really have a concrete way to control the framerate of our application, except perhaps by adding sleeps to slow the app down to the required rate. I will, however, not cover speed control in this post.

Back to the details of onDrawFrame. In the first couple of lines we enable alpha blending. You will also see that we disable depth testing, since it makes no difference in our world of flat, stacked layers.

We also invoke setLookAtM to position the camera three units in front of our layers.

You will also see that we now invoke runBatch from this method.

The actual drawing happens within sprite.Draw(). Sprite is one of the new classes we have created. There is a lot going on in this class, so I will give a summary of what happens within it.

Texture Rendering

As explained previously, we will draw the screen as 4 different layers with Alpha blending.

Each layer will be drawn as a rectangle filled with a texture. Theoretically this means that we need to supply four textures. It should be noted, though, that a bit of overhead is associated with each texture transfer.

To reduce this overhead a bit, we can combine the four textures into one large texture. As an example, look at the following:



The first picture is the resulting picture. The second picture shows the combined texture image. I have used the color magenta to indicate the transparent pixels.

When drawing the different layers, you just need to ensure that the texture coordinates select the portion of the combined texture applicable to that layer. To get a feel for the texture coordinates, have a look at the following definition:

        final float[] cubeTextureCoordinateData =
                {
                        //Sprite Foreground
                        0.0f, 0.0f, // top left
                        0.0f, 1.0f, // bottom left
                        0.258426966f, 1.0f, // bottom right
                        0.258426966f, 0.0f, //top right

                        //Foreground
                        0.258426966f, 0.0f, // top left
                        0.258426966f, 1.0f, // bottom left
                        0.516853932f, 1.0f, // bottom right
                        0.516853932f, 0.0f, //top right

                        //Sprite Background
                        0.516853932f, 0.0f, // top left
                        0.516853932f, 1.0f, // bottom left
                        0.775280898f, 1.0f, // bottom right
                        0.775280898f, 0.0f, //top right


                        //Background
                        0.775280898f, 0.166666f, // top left
                        0.775280898f, 0.833333f, // bottom left
                        1.0f, 0.833333f, // bottom right
                        1.0f, 0.166666f //top right
                };

Texture coordinates are in the range 0 to 1, running left to right and top to bottom.

Going Native

Time for us to modify our native code for drawing sprites.

Up to now our native code wrote the video output to a native buffer of 368 by 300 pixels.

However, as seen in the previous section, we now need a larger buffer containing a combination of all four layers. The dimensions of our native buffer therefore change to 1424x300. This dimension change requires the following change within FrontActivity.java:

...
    @Override
    protected void onCreate(Bundle savedInstanceState) {
...
       mByteBuffer = ByteBuffer.allocateDirect(
                (368 + //foreground sprite
                 368 + //front
                 368 + //background sprite
                 320 //background

                 ) * (300) * 4);
...
    }
...

Now, to the native code.

We start off by adding the following definitions to video.c:

...
#define STRIDE (368 + 368 + 368 + 320)

jchar colors_RGB_888[16][3] = {
                  {0, 0, 0},
                  {255, 255, 255},
                  {136, 0, 0},
                  {170, 255, 238},
                  {204, 68, 204},
                  {0, 204, 85},
                  {0, 0, 170},
                  {238, 238, 119},
                  {221, 136, 85},
                  {102, 68, 0},
                  {255, 119, 119},
                  {51, 51, 51},
                  {119, 119, 119},
                  {170, 255, 102},
                  {0, 136, 255},
                  {187, 187, 187}
};

jchar colors_RGB_565[16];

jint colors_RGB_8888[16];

void initialise_video() {
  int i;
...
  for (i=0; i < 16; i++) {
    colors_RGB_8888[i] = (255 << 24) | (colors_RGB_888[i][2] << 16) | (colors_RGB_888[i][1] << 8) | (colors_RGB_888[i][0] << 0);
    //colors_RGB_8888[i] = (255);
  }
}
...

We begin with a definition of the stride, which is the full pixel length of a line. We use this constant to advance line by line. Of course, at any point in time we will only be working with a subsection of a line, depending on which layer we are busy with.

We also define a new color table, colors_RGB_8888. This is basically our existing color table with an alpha channel added to each color. When we populate this array within initialise_video, we set the alpha channel of each color to fully opaque so that we don't need to worry about it later on.

Next, we write the following code to keep track of the different positions within the buffer:

...
int startOfLineTxtBuffer = 0;
int startOfFrontSpriteBuffer = 0;
int startOfBackgroundSpriteBuffer = 0;
int posInFrontBuffer = 0;
int posInBackgroundBuffer = 0;
...
static inline void processLine() {
  if (line_count > 299)
    return;

  posInFrontBuffer = startOfLineTxtBuffer + 368;
  posInBackgroundBuffer = startOfLineTxtBuffer + 368 + 368 + 368;

  startOfFrontSpriteBuffer = startOfLineTxtBuffer;
  startOfBackgroundSpriteBuffer = startOfLineTxtBuffer + 368 + 368;
...
  startOfLineTxtBuffer = startOfLineTxtBuffer + STRIDE;
}

Just for clarity, Txt stands for texture.

The code for the graphic modes does not change much apart from variable renames: the graphic modes now use and update the variables posInFrontBuffer and posInBackgroundBuffer.

Let us now move on to the Sprite code.

Part of the process of drawing a sprite involves a lot of back and forth between IO memory locations. I have therefore decided to create a method within memory.c that does all this reading of IO locations and returns the necessary info as a data structure. The definition of this structure is as follows:

struct sprite_data_struct {
    int sprite_data;
    int sprite_type; //bit 1: xExpanded bit 0: multicolor
    int isForegroundSprite;
    int color_tablet[4];
    int sprite_x_pos;
    int number_pixels_to_draw;
};

Here is a description of the different fields:

  • sprite_data: the three bytes of data for the required line within the sprite
  • sprite_type: two bits indicating whether the sprite is X-expanded and/or multicolored
  • isForegroundSprite: whether the sprite is a foreground or a background sprite
  • color_tablet: colors for the different bit combinations. If it is not a multicolored sprite, only index one is populated
  • sprite_x_pos: x position of the sprite
  • number_pixels_to_draw: either 24 or 48, depending on whether the sprite is X-expanded

And now for the implementation of this method within memory.c:

int processSprite(int spriteNum, int lineNumber, struct sprite_data_struct * sprite_data) {
  if (!(IOUnclaimed[0x15] & (1 << spriteNum)))
    return 0;

  int spriteY = IOUnclaimed[(spriteNum << 1) | 1];
  int yExpanded = IOUnclaimed[0x17] & (1 << spriteNum);
  int ySpriteDimension = yExpanded ? 42 : 21;
  int spriteYMax = spriteY + ySpriteDimension;

  if (!((lineNumber >= spriteY) && (lineNumber < spriteYMax)))
    return 0;

  int spriteX = (IOUnclaimed[spriteNum << 1] & 0xff);
  if (IOUnclaimed[0x10] & (1 << spriteNum))
    spriteX = 256 | spriteX;

  if (spriteX > 367)
    return 0;

  int xExpanded = IOUnclaimed[0x1d] & (1 << spriteNum);
  int xSpriteDimension = xExpanded ? 48 : 24;
  int spriteXMax = spriteX + xSpriteDimension;

  if (spriteXMax > 367)
    xSpriteDimension = 368 - spriteX;

  sprite_data->sprite_x_pos = spriteX;
  sprite_data->number_pixels_to_draw = xSpriteDimension;
  sprite_data->sprite_type = 0;
  if (xExpanded)
    sprite_data->sprite_type = sprite_data->sprite_type | 2;

  if (IOUnclaimed[0x1c] & (1 << spriteNum)) {
    sprite_data->sprite_type = sprite_data->sprite_type | 1;
    sprite_data->color_tablet[1] = IOUnclaimed[0x25] & 0xf;
    sprite_data->color_tablet[2] = IOUnclaimed[0x27 + spriteNum] & 0xf;
    sprite_data->color_tablet[3] = IOUnclaimed[0x26] & 0xf;
  } else {
    sprite_data->color_tablet[1] = IOUnclaimed[0x27 + spriteNum] & 0xf;
  }

  // $d01b: a set priority bit means the sprite is drawn behind the foreground
  sprite_data->isForegroundSprite = (IOUnclaimed[0x1b] & (1 << spriteNum)) ? 0 : 1;

  int memPointer = IOUnclaimed[0x18];
  int spritePointerAddress = memPointer & 0xf0;
  spritePointerAddress = spritePointerAddress << 6;

  spritePointerAddress = ((~IOUnclaimed[0xd00] & 3) << 14) | spritePointerAddress;
  spritePointerAddress = spritePointerAddress + 0x400 -8 + spriteNum;
  int spriteBaseAddress = mainMem[spritePointerAddress] << 6;


  int spriteLineNumber = lineNumber - spriteY;

  if (yExpanded)
    spriteLineNumber = spriteLineNumber >> 1;

  int posInSpriteData = (spriteLineNumber << 1) + (spriteLineNumber) + spriteBaseAddress;
  sprite_data->sprite_data = (mainMem[posInSpriteData + 0] << 16) | (mainMem[posInSpriteData + 1] << 8)
          | (mainMem[posInSpriteData + 2] << 0);

  return 1;

}

As parameters we accept the sprite number in question, the current line number, and a pointer to a sprite_data_struct in which we return the required data.

The return value is a boolean indicating whether the current sprite should be drawn at all. If any reason is found why the sprite shouldn't be drawn, we immediately return false.

Let us now define the main process flow of processing sprites within video.c:

void processSprites() {
  int i;
  struct sprite_data_struct currentSpriteData;
  for (i = 0; i < 8; i++) {
    int currentSpriteNum = 7 - i;
    if (processSprite(currentSpriteNum, line_count, &currentSpriteData)) {
      spriteFunctions[currentSpriteData.sprite_type] (currentSpriteData);
    }
  }
}

static inline void processLine() {
  if (line_count > 299)
    return;

  posInFrontBuffer = startOfLineTxtBuffer + 368;
  posInBackgroundBuffer = startOfLineTxtBuffer + 368 + 368 + 368;

  startOfFrontSpriteBuffer = startOfLineTxtBuffer;
  startOfBackgroundSpriteBuffer = startOfLineTxtBuffer + 368 + 368;

  updatelineCharPos();
  fillColor(24, memory_unclaimed_io_read(0xd020) & 0xf);
  int screenEnabled = (memory_unclaimed_io_read(0xd011) & 0x10) ? 1 : 0;
  if (screenLineRegion && screenEnabled) {
    processSprites();
...
  }
...
}

For each line that we process we call processSprites, which loops through all eight sprites, calling for each the processSprite method we defined earlier within memory.c.

We only bother with a sprite if processSprite returned true.

You will notice something interesting in the if statement where we test the return value of processSprite. When we enter this if statement, one of four possible draw functions needs to be called, depending on whether the sprite is X-expanded and/or multicolored.

Usually we would use a four-case switch statement for this. However, to save some CPU cycles, we can use a four-element array of function pointers as a lookup table. We declare and populate this array as follows:

...
void (*spriteFunctions[4]) (struct sprite_data_struct spriteData);

void drawExpandedMulticolorSpriteLine(struct sprite_data_struct spriteData);
void drawUnExpandedMulticolorSpriteLine(struct sprite_data_struct spriteData);
void drawExpandedNormalSpriteLine(struct sprite_data_struct spriteData);
void drawUnExpandedNormalSpriteLine(struct sprite_data_struct spriteData);
...
void initialise_video() {
...
  spriteFunctions[0] = &drawUnExpandedNormalSpriteLine;
  spriteFunctions[1] = &drawUnExpandedMulticolorSpriteLine;
  spriteFunctions[2] = &drawExpandedNormalSpriteLine;
  spriteFunctions[3] = &drawExpandedMulticolorSpriteLine;
}
...


The above sprite functions are defined as follows:

void drawUnExpandedNormalSpriteLine(struct sprite_data_struct currentSpriteData) {
      int currentPosInSpriteBuffer;
      if (currentSpriteData.isForegroundSprite)
        currentPosInSpriteBuffer = startOfFrontSpriteBuffer;
      else
        currentPosInSpriteBuffer = startOfBackgroundSpriteBuffer;
      currentPosInSpriteBuffer = currentPosInSpriteBuffer + currentSpriteData.sprite_x_pos;
      int j;
      int spriteData = currentSpriteData.sprite_data;
      int upperLimit = currentPosInSpriteBuffer + currentSpriteData.number_pixels_to_draw;
      for (j = currentPosInSpriteBuffer; j < upperLimit; j++) {
        if (spriteData & 0x800000) {
          g_buffer[j] = colors_RGB_8888[currentSpriteData.color_tablet[1]];
        }
        spriteData = (spriteData << 1) & 0xffffff;
      }

}

void drawExpandedNormalSpriteLine(struct sprite_data_struct currentSpriteData) {
      int currentPosInSpriteBuffer;
      if (currentSpriteData.isForegroundSprite)
        currentPosInSpriteBuffer = startOfFrontSpriteBuffer;
      else
        currentPosInSpriteBuffer = startOfBackgroundSpriteBuffer;
      currentPosInSpriteBuffer = currentPosInSpriteBuffer + currentSpriteData.sprite_x_pos;
      int j;
      int spriteData = currentSpriteData.sprite_data;
      for (j = 0; j < (currentSpriteData.number_pixels_to_draw >> 1); j++) {
        if (spriteData & 0x800000) {
          g_buffer[currentPosInSpriteBuffer + 0] = colors_RGB_8888[currentSpriteData.color_tablet[1]];
          g_buffer[currentPosInSpriteBuffer + 1] = colors_RGB_8888[currentSpriteData.color_tablet[1]];
        }
        currentPosInSpriteBuffer = currentPosInSpriteBuffer + 2;
        spriteData = (spriteData << 1) & 0xffffff;
      }

}

void drawUnExpandedMulticolorSpriteLine(struct sprite_data_struct currentSpriteData) {
      int currentPosInSpriteBuffer;
      if (currentSpriteData.isForegroundSprite)
        currentPosInSpriteBuffer = startOfFrontSpriteBuffer;
      else
        currentPosInSpriteBuffer = startOfBackgroundSpriteBuffer;
      currentPosInSpriteBuffer = currentPosInSpriteBuffer + currentSpriteData.sprite_x_pos;
      int j;
      int spriteData = currentSpriteData.sprite_data;
      for (j = 0; j < (currentSpriteData.number_pixels_to_draw >> 1); j++) {
        int pixels = (spriteData & 0xC00000) >> 22;
        if (pixels > 0) {
          g_buffer[currentPosInSpriteBuffer + 0] = colors_RGB_8888[currentSpriteData.color_tablet[pixels]];
          g_buffer[currentPosInSpriteBuffer + 1] = colors_RGB_8888[currentSpriteData.color_tablet[pixels]];
        }
        currentPosInSpriteBuffer = currentPosInSpriteBuffer + 2;
        spriteData = (spriteData << 2) & 0xffffff;
      }

}

void drawExpandedMulticolorSpriteLine(struct sprite_data_struct currentSpriteData) {
      int currentPosInSpriteBuffer;
      if (currentSpriteData.isForegroundSprite)
        currentPosInSpriteBuffer = startOfFrontSpriteBuffer;
      else
        currentPosInSpriteBuffer = startOfBackgroundSpriteBuffer;
      currentPosInSpriteBuffer = currentPosInSpriteBuffer + currentSpriteData.sprite_x_pos;
      int j;
      int spriteData = currentSpriteData.sprite_data;
      for (j = 0; j < (currentSpriteData.number_pixels_to_draw >> 2); j++) {
        int pixels = (spriteData & 0xC00000) >> 22;
        if (pixels > 0) {
          g_buffer[currentPosInSpriteBuffer + 0] = colors_RGB_8888[currentSpriteData.color_tablet[pixels]];
          g_buffer[currentPosInSpriteBuffer + 1] = colors_RGB_8888[currentSpriteData.color_tablet[pixels]];
          g_buffer[currentPosInSpriteBuffer + 2] = colors_RGB_8888[currentSpriteData.color_tablet[pixels]];
          g_buffer[currentPosInSpriteBuffer + 3] = colors_RGB_8888[currentSpriteData.color_tablet[pixels]];
        }
        currentPosInSpriteBuffer = currentPosInSpriteBuffer + 4;
        spriteData = (spriteData << 2) & 0xffffff;
      }

}

This more or less concludes the development required for displaying sprites in our emulator.

A Test Run

With all the changes in place, I did a test run.

All the sprite graphics came out pretty much as expected.

As mentioned before, I haven't made any attempt to add delays to slow the emulator down to real C64 speed. Despite all the extra functionality added, I still measured a speed faster than 1.0x.

This faster-than-1.0x speed gives us some confidence that we have CPU headroom left for other features, like implementing SID sound.

Let us end this section off with some screenshots.





In Summary

In this post we have implemented sprites with the help of OpenGL ES to assist with transparency.

Well, I think we have almost reached the point of concluding this series on creating an Android C64 emulator. In fact, when I did my JavaScript emulator series, the sprites post was my final post.

However, after some contemplation over the last couple of days, I decided to add a couple more posts on adding SID sound.

So, in the next post I am planning to start exploring the SID, with a view to implementing sound within our Android C64 emulator.

Till next time!
