Extending the Generic API for Render Control
Author: Jeff Rollason and Daniel Orme

Copyright Notice: This article is Copyright AI Factory Ltd. Ideas and code belonging to AI Factory may only be used with the direct written permission of AI Factory Ltd.

Previous articles, including the "Puzzle Workbench", "Intelligent Diagnostic Tracing" and "Developing Biological Emulation Using a Generic Turn-based Testbed", described a generic game engine development system that defined a universal interface to control any game. This interface considered all logical aspects of gameplay and game control, and was platform independent, allowing the same game engine to be slotted into any GUI on any device.

This article looks at an extension to the above; allowing the game engine to get involved in render control and real-time processing. This is still "generic", but moves the API boundary so that the testbed can be extended to generically control graphical output in real time.

The Value of Having Generic Real-Time Render Control

Our existing, well-used system has allowed our game engines to be ported to a wide variety of game systems, including Microsoft's MSN Chess, in-flight games, DS, PSP, Xbox 360, PC and other consoles. Our in-house generic testbed has allowed these games to be honed into robust game engines, so that they can be easily developed and tested well before committing to any new application platform.

However, although turn-based games can be totally analysed under such a system, there are many games, in particular real-time games, which are more naturally developed with a full real-time development environment. A compromise here was our biological emulation of the fish tank (see our current news movie). This highly visual real-time system still worked well within the testbed, as it was mostly a closed system: it depended on real-time sequencing, but mostly did not need to respond to external UI-driven events. It had a bespoke interface to drive a graphical add-on, so that the testbed could drive animation.


However, many much simpler games, for example the very simple game Pong (above), depend primarily on UI input to control the positioning of the paddle (bat) in response to the movement of the ball. Although our system could support this, it would need another bespoke graphical interface to allow the game to be fully independently developed.

Requirements for Generic Render Control

Considering the very simple game of Pong, we can look at the core requirements:

1. Passing real-time input to the core Pong game engine
2. Passing real-time render and audio information back to the GUI

Having this controlled in real time needs a framework to represent it. Unlike the generic turn-based system, also used for the Aquarium, we must have a generic way of handling real time. Ideally we do not want the game engine to have to monitor this itself, as that would introduce platform dependency inside the core game engine. Instead, the external GUI should instruct the game engine when time has elapsed, so that the game state can be progressed. Since we do not want the game engine to control time, we also do not want it to try to directly drive graphical or audio output. A better system is for the external GUI to interrogate the game engine to find out what needs to be rendered and what sounds need to be played.

Our Adopted System

Basically, our system provides four core additional interface calls on top of our existing API, as follows:

TGameResult  Fb_TimeStep(int aMsecs);
TBool        Fb_SetButton(const TButton aButton, const TBool aState);
TRenderList* Fb_GetRenderList();
TAudioList*  Fb_GetAudioList();

Fb_TimeStep is called by the GUI to tell the engine the absolute time in milliseconds. The engine uses successive values to track elapsed time, which it can use to update the game state; in the case of Pong, this would be to move the ball. It returns an enumerated type, which will be game-specific, to indicate the current game state. This could be one of several different types of game state. In the case of Pong there is mostly just one state, that of "ball in play", but this might include "configuration" or "end of game" stages. Of course, if this function is called at slightly irregular time intervals, it may not be convenient for the game engine, which would prefer to calculate events against some fixed clock interval rather than calculate fractionally-timed events. To accommodate this, the function can simply log accumulated time and not trigger a new "update game state" until a fixed time interval has elapsed. In practice, this is the best plan.
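A minimal sketch of this accumulation scheme might look like the following. The class and member names here are purely illustrative, not the actual AI Factory declarations; the tick length of 20 ms is an arbitrary example.

```cpp
#include <cassert>

// Illustrative sketch: accumulate irregular wall-clock time deltas and only
// advance the game state in fixed-size ticks (here an assumed 20 ms tick).
class TTickAccumulator {
public:
    explicit TTickAccumulator(int aTickMsecs) : iTickMsecs(aTickMsecs) {}

    // Called with the absolute time in milliseconds, as Fb_TimeStep is.
    // Returns how many whole fixed ticks the engine should now advance.
    int TimeStep(int aMsecs) {
        if (iLastMsecs >= 0)
            iAccumulated += aMsecs - iLastMsecs;   // log accumulated time
        iLastMsecs = aMsecs;
        int ticks = iAccumulated / iTickMsecs;     // whole ticks now available
        iAccumulated -= ticks * iTickMsecs;        // keep the remainder
        return ticks;
    }

private:
    int iTickMsecs;
    int iAccumulated = 0;
    int iLastMsecs = -1;   // -1 => no call seen yet
};
```

The engine's game-state update then runs once per returned tick, so irregular GUI call intervals never produce fractional time steps.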

Fb_SetButton is used to indicate that the user interface has delivered some input. This can be handled in more than one way, but ideally it is used to cyclically report the status of all control inputs, so reporting both button pressed and button released. Where an input has more than an on/off status, an extra parameter can be added or a separate member function provided.
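On the engine side, this cyclic reporting reduces to keeping the latest state of every control and noticing press/release transitions. The sketch below is an assumed internal representation, not the actual AI Factory code; the button names are just the Pong examples from the loop further down.

```cpp
#include <cassert>

// Illustrative sketch: the engine records the latest state of every control,
// as reported cyclically by the GUI via an Fb_SetButton-style call.
typedef bool TBool;
enum TButton { EUpPaddle, EDownPaddle, EButtonCount };

class TButtonState {
public:
    // Returns true if the reported state differs from the stored one,
    // i.e. a press or release transition has just occurred.
    TBool SetButton(TButton aButton, TBool aState) {
        TBool changed = (iState[aButton] != aState);
        iState[aButton] = aState;
        return changed;
    }

    TBool IsDown(TButton aButton) const { return iState[aButton]; }

private:
    TBool iState[EButtonCount] = { false, false };
};
```

Because the GUI reports every control each cycle, the engine can distinguish "still held" from "just pressed" without any platform-specific input polling.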

Fb_GetRenderList is used to provide a list of bitmaps to be rendered. TRenderList is a class with a count of bitmaps to be rendered and an array of TRenderObject objects. The latter defines the TGA number, the X/Y offsets into the TGA, the width and height of the bitmap, and the render conditions (alpha, additive, etc.). Note that these are provided in render order, as bitmaps may inevitably overlap other bitmaps. As is common practice, a single TGA will contain many bitmap images, combined into one sheet so that bitmaps can be handled efficiently. The render list also has an option to render text within a defined text box.
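The data involved might be declared roughly as below. These field names, the screen-position fields and the fixed array size are assumptions for illustration only, not the actual AI Factory declarations.

```cpp
#include <cassert>

// Illustrative sketch of the data a TRenderList might carry.
enum TRenderMode { ENormal, EAlpha, EAdditive };   // render conditions

struct TRenderObject {
    int iTGA;               // which TGA sheet the bitmap lives in
    int iSrcX, iSrcY;       // X/Y offset of the bitmap within the TGA
    int iWidth, iHeight;    // bitmap dimensions
    int iDestX, iDestY;     // assumed: where to draw it on screen
    TRenderMode iMode;      // alpha, additive, etc.
};

struct TRenderList {
    int iNumber;                 // count of bitmaps to render this frame
    TRenderObject iObject[64];   // items, already in render order
};
```

The GUI walks iObject[0..iNumber-1] in order, so overlap resolution is already decided by the engine and the renderer stays entirely game-agnostic.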

Fb_GetAudioList is structured like Fb_GetRenderList, providing the number of audio items to be played and the list of these. Each item provides the SFX number and any additional information needed for playback. The interface also provides an option for stopping existing SFX in play.

The latter two functions also require initialisation member functions, Fb_GetTGAResourceList() and Fb_GetSFXResourceList(), to provide lists of TGA and SFX resource filenames. Thereafter these resources are referenced by TGA and SFX resource numbers.
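In other words, filenames cross the API boundary exactly once, at start-up, and everything afterwards is index-based. A sketch of the idea, with an assumed TResourceList shape and made-up example filenames:

```cpp
#include <cassert>
#include <cstring>

// Illustrative sketch: the engine exposes its resource filenames once at
// start-up; thereafter render/audio items refer to them only by index.
// The filenames and the TResourceList layout are assumptions, not the
// actual AI Factory API.
static const char* const KTGAResources[] = { "pong_sprites.tga" };
static const char* const KSFXResources[] = { "beep.wav", "score.wav" };

struct TResourceList {
    int iNumber;                 // how many resource files
    const char* const* iNames;   // their filenames, indexed by resource number
};

TResourceList Fb_GetTGAResourceList() {
    return { 1, KTGAResources };
}

TResourceList Fb_GetSFXResourceList() {
    return { 2, KSFXResources };
}
```

The GUI loads each file once at initialisation; after that, a TRenderObject's TGA number or an audio item's SFX number is simply an index into these lists, keeping all per-frame traffic free of strings.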

The Architecture in use for Pong

Considering the simple game Pong above, the requirements are pretty simple. Pong would require bitmaps for just the paddle (bat), the ball and the net. In reality these would be overlapped so that the ball is just a subset of the paddle, and the net probably a subset of the ball. The net could be rendered with each dotted segment as a separate render item; conversely, the entire net could be represented by one long bitmap.

Paddle input would also be simple. Each time a paddle input is made, the Pong game engine could simply update the position of the paddle.

The GUI side of Pong might be just along the lines of the following simple loop:

while (TRUE) {
    // respond to input
    if ( paddle up pressed )
        Fb_SetButton(EUpPaddle, TRUE);
    if ( paddle down pressed )
        Fb_SetButton(EDownPaddle, TRUE);
    // flag elapsed time
    iState = Fb_TimeStep(millisecs);
    if ( iState == EGameOver )
        break;
    // get render info
    iRenderList = Fb_GetRenderList();
    // render list
    for ( i=0 ; i<iRenderList->iNumber ; ++i ) {
        // render i'th item
    }
    // get audio list
    iAudioList = Fb_GetAudioList();
    for ( i=0 ; i<iAudioList->iNumber ; ++i ) {
        // play i'th audio item
    }
    // other processing
}
// other processing

The code above is over-simplified, but reflects the core structure of the interface.

The Architecture for more Complex Games

The above Pong example is simple, but is essentially what would be needed by any game. If a more complex game, with set-up options and multiple game stages, were required, the interface would be essentially the same. The key here is that the driving interface need not know whether the game is currently allowing the player to step through menu options or is actively playing the game. This is entirely irrelevant to the GUI, which only knows what is to be rendered, what inputs have occurred and what audio is to be triggered. In that respect the GUI has no idea what is happening. The exception to this is the returned TGameResult value, which the interface can use to trigger specific events. The primary one of these is "game over", but they might also include a "save game" event, passing data to some parent game, or even passing on networked information.

Applications for this Architecture - Simple games and Embedded Minigames

Of course the obvious candidate is to simply develop simple standalone games, such as a graphical puzzle game. Our extended testbed allows the complete game to be developed, with not just game logic but a generic rendering and audio system to fully represent the game. Since the testbed render control is generic, the same architecture can be used for pretty well any game, or even an editor or any simple application.

However this format is also ideal for embedding minigames inside other larger games. A single simple driving interface for the game means the main application needs to do very little to support this.


This moves the generic game logic architecture from providing a specialised game component to providing a complete game entity. Taking a game developed on this system to some new game console requires little more than the game loop shown above. This same wrapping game loop can then be recycled to support any number of games without any significant extra effort. This is an ideal way to move games between multiple game platforms.

Of course this also provides an easy means of embedding minigames in other larger games, with the advantage that the minigames can be totally developed outside the main game.

This system has already been used to provide some 70 bespoke minigames.

Jeff Rollason and Daniel Orme: December 2008