Elysian Shadows

gyTypes, Audio, iOS and x86 Frameworks

The structures offered by libGyro are currently:


GYMatrix4
GYMatrix3
GYVector4
GYVector3
GYVector2
GYWav
GYTexture
GYTextureCoord
GYInput

The linear algebra structures should all be properly aligned by platform thanks to this fuck-ugly code segment:

//Column-Major
typedef float
#if GYRO_PLATFORM == GYRO_DREAMCAST
__attribute__((aligned(32)))
#elif GYRO_PLATFORM == GYRO_WHATEVER
//ALIGN ME
#endif
GYMatrix4[4][4];

[b]Audio API[/b]

I went ahead and created the Audio API for LibGyro 2.0. At the moment, it supports .ogg for music and .wav for sound effects. OGGs are "just played," while WAVs are preloaded and stored, then played. On the embedded platforms, we really don't have enough space to be buffering multiple OGGs.

The WAV effects are also stored in a managed bank with smart pointers. Loading the same WAV again simply returns a shared pointer, and the resource isn't freed until all references have been destroyed (as with the Texture Bank).

int gyAudInit();
int gyAudUninit();
int gyAudPlayOgg(const char *const filename);
int gyAudStopOgg();
void gyAudPlayWav(const GYWav *const wav);
GYWav *gyAudWavCreate(const char *const filename);
void gyAudWavDestroy(GYWav *const wav);
unsigned int gyAudWavAmount();
int gyAudWavClear();
void gyAudWavWalkthrough();

[b]Video API[/b]

So... It didn't take long for me to change my mind here. Tvspelsfreak bitched that we used too many small function calls for simple rendering, and M_D_K bitched because he wanted to use VBOs (because he's a dick). After spending a bit of time with OpenGL and OpenGL ES, I've decided to use a vertex-array style paradigm, because it's WAY more efficient. It's also closer to how every platform's back-end is structured natively. With this redesign, I've taken a shit on all immediate mode-style functions (gyColor(), gyTexCoord(), gyVertex()).

The API now uses OpenGL-style vertex arrays for general-purpose rendering:

gyEnableClientState(GL_VERTEX_ARRAY | GL_TEXTURE_COORD_ARRAY);
gyVertexPointer(const unsigned int size, const unsigned int stride, void *vertex);
gyTexCoordPointer(const unsigned int size, const unsigned int stride, void *texCoord);
gyDrawArrays(const GY_PRIMITIVE prim, const unsigned int startIndex, const unsigned int amount);

And offers optimized functions that invoke this back-end for specialized rendering:

void gyVidFastSprite(const unsigned int frame);
//more to come

I'm finally content with the Video API. It 1) is efficient as fuck, 2) closely imitates how shit works at the back-end, and 3) is modeled after VAs (vertex arrays) in OpenGL, so there is almost no learning curve.

[b]Dreamcast[/b]

...is like 99.99% done, seriously. All that I need to do is submit strip headers for rendering. I would have done this too, if Tvspelsfreak hadn't reminded me that KOS's hardware sprite headers don't handle color... we'll need some Tvspelsfreak-style manual bitfucking to handle this.

[b]iOS/x86 Joint Framework (glal)[/b]

In an attempt to maximize code reuse and minimize development time, M_D_K and I have opted to combine overlapping portions of the iOS and x86 frameworks into one codebase. This codebase revolves around OpenGL (video) and OpenAL (audio). I'm calling this joint codebase (as it is in the repositories) "glal." 

I began the glal video implementation. It uses OpenGL vertex arrays (of course). One thing to note: the MODELVIEW * VERTEX transformation happens on the GPU in this paradigm (whereas it's done on the CPU with DC/PSP). This means that before rendering, we need to submit our current matrix from libGyro's matrix stack to OpenGL with glLoadMatrix(). Judging by what I've read about the performance of OGLES on iOS/Android, we are going to be muuuch better off handling these transforms on our own with some ARM ASM and their vector units (OpenGL's matrix implementation isn't hardware accelerated there). LibGyro's matrix paradigm shines here. :)

[b]x86 (Get the fuck to work, Falco)[/b]

Technically, I finished all of this shit. I just need to get the library building/tested (as with DC). It's still SDL-based. People like MDK can bitch that LibGyro is a layer of abstraction on a layer of abstraction here (and they'd be right), but it's 1) very, very thin and 2) saves SO much time. With SDL, we can have an x86 build that covers OSX/Nix/Win without having to waste time creating windows and gathering input on each. We can go back, separate them, and remove SDL in the future if we are absolutely that anal, but for now, it's totally worth it (at least we aren't using GLUT ;) ).

[b]iOS (Get the fuck to work, MDK)[/b]

Okay, so. I'm handling "glal video" right now. All of the iOS-specific implementation is blank and ready for you:

#include "gyro_system.h"

//OBJECTIVE-C-IFY ME! HAVE FUN, M_D_K! :D :D :D

/* Needs to take initialization arguments. */
int gySystemInit() {
    return 1;
}

int gySystemUninit() {
    return 1;
}

void gyPrintf(const char *const str, ...) {
    va_list argptr;
    static char buffer[GYRO_PRINTF_BUFFER_SIZE];
    va_start(argptr, str);
    vsnprintf(buffer, sizeof(buffer), str, argptr); //bounded, unlike vsprintf
    va_end(argptr);
    //printf("%s", buffer);
}

unsigned int gySysStatsVram() {
    return 0;
}

unsigned int gySysStatsRam() {
    return 0;
}

as is "glal audio." You can handle that if you wish (I've never done OpenAL). I'm sure that the iOS stuff is going to be some fuck-ugly Obj-C glue, so good luck there. Count me the fuck out. :)

A few design considerations I'd like to discuss with you:

1) With OpenAL, we need threading - we can either handle that behind the scenes or offer a libGyro threading API. Either way, I'm opting to use [i]SDL_Threads[/i] on x86 rather than pthreads (I know you're going to bitch about this), because that's BUILT INTO SDL. It's a waste to add another dependency when we're already using SDL here... You can use pthreads on iOS/Android or whatever if you wish.  :mrgreen: 

2) Filesystem - So what's the deal here? I've tried NOT to have a filesystem, and to just assume that the executable is running within the "Project" folder on every platform. This assumption is very nice, because paths are all relative and the same on every platform. Why do we need a filesystem again? On iOS, does it have to be something like /ESApp/home/AssetDir? Can't you just make the system initialization change the working directory to AssetDir to match the other builds and simplify everything?

3) Event-driven architecture - Let's try to find a way NOT to have those fuck-ugly #ifdefs in main() where we had to register render, update, etc. with the iOS event-driven shit. If that registration still needs to happen, then we need to add it to the LibGyro API and make every platform work like that. They at least need to share the same paradigm.  :| 

[b]LibGyro Templates[/b]

The evil, secret goal of libGyro has always been to create an open library that would allow slaves to port ES to other platforms FOR US. We're putting the initial work into the library to get it stable and efficient on the core platforms now, but in the future, it's the slaves (fans) who will be porting it to the PS3/Wii/360/Sega 32x for us. Because of this, I have made /source/template and /include/template folders with blank, ready-to-implement libGyro functionality, so that people can immediately begin filling in functions to port the API to other platforms. The slaves will thank me.

So anyway, if MDK has a productive week, we may be able to completely finish the Dreamcast, Windows, OSX, Linux, and iOS implementations in a single weekend. Then I'll only have the PSP left before I take off my C hat and put back on my C++ hat and get back to the engine (like I know you all want me to).

Falco Girgis
Falco Girgis is the founder and lead software architect of the Elysian Shadows project. He was previously employed in the telecom industry before taking a chance on Kickstarter and quitting his job to live the dream. He is currently pursuing his masters in Computer Engineering with a focus on GPU architecture.