Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:04 am
by X Abstract X
I'm trying to write a function to vertically flip an SDL_Surface, but I get access violations every time I try to write to the pixel data. Obviously I'm doing something wrong; can anyone help?
Edit: Code removed for now.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:15 am
by Ginto8
I didn't feel like looking at the code, but you really should take a look at lazyfoo's tutorial on the subject:
http://lazyfoo.net/SDL_tutorials/lesson31/index.php
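For reference, the core trick from that tutorial is addressing pixels through the surface's pitch and BytesPerPixel. A minimal sketch along those lines (assuming SDL 1.2, with locking left to the caller; the name getPixel is made up here):

```cpp
// Minimal sketch of reading one pixel from an SDL 1.2 surface, in the
// spirit of that tutorial. The caller should SDL_LockSurface() first
// if SDL_MUSTLOCK(surface) is true.
#include <SDL/SDL.h>

Uint32 getPixel(SDL_Surface* surface, int x, int y) {
    int bpp = surface->format->BytesPerPixel;
    // pitch is the length of one row in bytes (may include padding)
    Uint8* p = (Uint8*)surface->pixels + y * surface->pitch + x * bpp;

    switch (bpp) {
    case 1:
        return *p;
    case 2:
        return *(Uint16*)p;
    case 3: // 24-bit: no Uint24 exists, so assemble the bytes by hand
        if (SDL_BYTEORDER == SDL_BIG_ENDIAN)
            return p[0] << 16 | p[1] << 8 | p[2];
        else
            return p[0] | p[1] << 8 | p[2] << 16;
    case 4:
        return *(Uint32*)p;
    default:
        return 0; // shouldn't happen
    }
}
```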
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:23 am
by X Abstract X
I wrote mine after reading that article. I've looked it over several times and I still can't figure out my problem.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:30 am
by XianForce
Well, nothing pops out from looking at it. The only thing I can think of is that integers aren't the same size on your system, so instead of "unsigned int", use "Uint32". If your unsigned ints happen to be larger than Uint32, you'll definitely get an out-of-bounds error.
But you may also want to start debugging a bit: set some breakpoints and find the exact point the error comes from.
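A quick way to test the type-size theory is to just print both sizes; a tiny check (nothing SDL-specific beyond the Uint32 typedef):

```cpp
// Tiny check of the type-size theory: SDL's Uint32 is guaranteed to be
// 32 bits, plain unsigned int is not.
#include <SDL/SDL.h>
#include <cstdio>

int main(int argc, char** argv) { // SDL 1.2 remaps main, so keep the full signature
    std::printf("sizeof(unsigned int) = %u\n", (unsigned)sizeof(unsigned int));
    std::printf("sizeof(Uint32)       = %u\n", (unsigned)sizeof(Uint32));
    return 0;
}
```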
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:34 am
by X Abstract X
I figured something out: the function works on my 32-bit RGBA surface but not my 24-bit RGB surface. What can I do, though, besides converting the surface to 32-bit?
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:36 am
by XianForce
X Abstract X wrote: I figured something out: the function works on my 32-bit RGBA surface but not my 24-bit RGB surface. What can I do, though, besides converting the surface to 32-bit?
Well, your unsigned int is 32 bits, so you need something like a Uint24... but I don't know if SDL has that, or how you'd go about making one...
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 12:38 am
by X Abstract X
XianForce wrote: Well, your unsigned int is 32 bits, so you need something like a Uint24... but I don't know if SDL has that, or how you'd go about making one...
I must be able to do this by separating the components into unsigned bytes. I'll see what I can do.
P.S. Fuck, I hate working with all these wrapper data types.
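One way to sidestep the missing Uint24 entirely is to swap whole rows as raw bytes, never interpreting individual pixels. A sketch of that approach (SDL 1.2; the helper name flipVertically is made up here, not OP's removed code):

```cpp
// Sketch: flip a surface vertically by swapping whole rows of raw
// bytes. Works for any BytesPerPixel, 24-bit included, because no
// individual pixel is ever interpreted. flipVertically is a made-up
// name, not an SDL function.
#include <SDL/SDL.h>
#include <cstring>
#include <vector>

void flipVertically(SDL_Surface* surface) {
    if (SDL_MUSTLOCK(surface))
        SDL_LockSurface(surface);

    const int pitch = surface->pitch;      // bytes per row, padding included
    Uint8* pixels = (Uint8*)surface->pixels;
    std::vector<Uint8> row(pitch);         // scratch row for the swap

    for (int y = 0; y < surface->h / 2; ++y) {
        Uint8* top    = pixels + y * pitch;
        Uint8* bottom = pixels + (surface->h - 1 - y) * pitch;
        std::memcpy(&row[0], top, pitch);
        std::memcpy(top, bottom, pitch);
        std::memcpy(bottom, &row[0], pitch);
    }

    if (SDL_MUSTLOCK(surface))
        SDL_UnlockSurface(surface);
}
```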
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 2:06 am
by Ginto8
X Abstract X wrote:I figured something out... the function works on my 32 bit RGBA surface but not my 24 bit RGB surface. What can I do though besides converting the surface to 32 bit?
Okey dokey... you can either blit it to a preformatted surface or convert it. Unless you're doing padding for, say, OpenGL (which I don't think you are), look at SDL_ConvertSurface (http://sdl.beuc.net/sdl.wiki/SDL_ConvertSurface).
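A sketch of what that conversion might look like, forcing a known 32-bit RGBA layout (the helper name toRGBA32 and the little-endian masks are assumptions, not SDL API):

```cpp
// Sketch: convert any loaded surface to a known 32-bit RGBA layout.
// The helper name toRGBA32 is made up; the masks assume a little-endian
// machine with R in the lowest byte.
#include <SDL/SDL.h>

SDL_Surface* toRGBA32(SDL_Surface* src) {
    // Throwaway 1x1 surface purely to get a fully filled-in
    // 32-bit RGBA SDL_PixelFormat to convert into.
    SDL_Surface* tmp = SDL_CreateRGBSurface(SDL_SWSURFACE, 1, 1, 32,
        0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000);
    if (!tmp)
        return NULL;

    SDL_Surface* converted = SDL_ConvertSurface(src, tmp->format, SDL_SWSURFACE);
    SDL_FreeSurface(tmp);
    return converted; // NULL on failure; caller frees on success
}
```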
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 2:20 am
by X Abstract X
Ginto8 wrote: Okey dokey... you can either blit it to a preformatted surface or convert it. Unless you're doing padding for, say, OpenGL (which I don't think you are), look at SDL_ConvertSurface (http://sdl.beuc.net/sdl.wiki/SDL_ConvertSurface).
I want to avoid converting the surface because I think the performance may be better if I just swap the relevant bytes; I'm going to try both methods and compare. On another note, I'm wondering why I get this access violation at all, since I thought ALL SDL_Surfaces, regardless of their depth, store the pixel data as a 32-bit unsigned integer.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 2:36 am
by Ginto8
X Abstract X wrote: I want to avoid converting the surface because I think the performance may be better if I just swap the relevant bytes; I'm going to try both methods and compare. On another note, I'm wondering why I get this access violation at all, since I thought ALL SDL_Surfaces, regardless of their depth, store the pixel data as a 32-bit unsigned integer.
Converting the surface one way or another is really the optimal choice, since otherwise SDL has to convert it automatically every time you blit it. Look at lazyfoo's tutorial where it has "optimized" image loading; that's specifically designed so the image conversion happens only once, and after that simply doesn't happen.
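The pattern in question, roughly (SDL 1.2; loadOptimized is a made-up name, and SDL_DisplayFormat needs the video mode to be set first):

```cpp
// Sketch of the convert-once-at-load-time pattern (SDL 1.2).
// SDL_DisplayFormat returns a copy matched to the screen's format, so
// no per-blit conversion happens afterwards. Requires SDL_SetVideoMode
// to have been called already. loadOptimized is a made-up name.
#include <SDL/SDL.h>

SDL_Surface* loadOptimized(const char* path) {
    SDL_Surface* loaded = SDL_LoadBMP(path); // or IMG_Load from SDL_image
    if (!loaded)
        return NULL;

    SDL_Surface* optimized = SDL_DisplayFormat(loaded); // one-time conversion
    SDL_FreeSurface(loaded);   // the raw copy is no longer needed
    return optimized;          // NULL if the conversion failed
}
```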
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 2:48 am
by X Abstract X
Ginto8 wrote: Converting the surface one way or another is really the optimal choice, since otherwise SDL has to convert it automatically every time you blit it. Look at lazyfoo's tutorial where it has "optimized" image loading; that's specifically designed so the image conversion happens only once, and after that simply doesn't happen.
The surface is being used to build a 24-bit OpenGL texture.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 3:50 pm
by Ginto8
X Abstract X wrote: The surface is being used to build a 24-bit OpenGL texture.
Are you sure you don't want 32-bit RGBA? For OpenGL it's always better to have transparency available.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 6:50 pm
by X Abstract X
Ginto8 wrote: Are you sure you don't want 32-bit RGBA? For OpenGL it's always better to have transparency available.
I'm not trying to be a jerk, but can you explain why it's better? Is it faster because it's a power of two and the graphics card's bus width is usually (always?) a power of two? If I have a hell of a lot of textures, it sounds like a waste of memory to me. I think memory is probably more precious than processing power for my application.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 8:47 pm
by Ginto8
X Abstract X wrote: I'm not trying to be a jerk, but can you explain why it's better? Is it faster because it's a power of two and the graphics card's bus width is usually (always?) a power of two? If I have a hell of a lot of textures, it sounds like a waste of memory to me. I think memory is probably more precious than processing power for my application.
I don't know about the speed of 24-bit vs 32-bit, but I DO know that the way SDL stores pixels, it's gonna be an array of 32-bit ints anyway, and it will just end up ignoring 8 bits per pixel, so it will most likely end up messing up on the transfer to OpenGL anyway. Second, a ton of textures? Shouldn't you consolidate those into tile/sprite sheets, lowering the number of textures necessary? Third, the extra memory from adding one byte to every pixel is so insanely small in the overall scheme of things that you REALLY shouldn't be worried about it. As Donald Knuth puts it, "Premature optimization is the root of all evil," so you shouldn't optimize until you actually have a speed/memory issue.
Re: Writing to SDL_Surface Pixels
Posted: Wed Jul 14, 2010 10:01 pm
by X Abstract X
Ginto8 wrote: I don't know about the speed of 24-bit vs 32-bit, but I DO know that the way SDL stores pixels, it's gonna be an array of 32-bit ints anyway, and it will just end up ignoring 8 bits per pixel, so it will most likely end up messing up on the transfer to OpenGL anyway. Second, a ton of textures? Shouldn't you consolidate those into tile/sprite sheets, lowering the number of textures necessary? Third, the extra memory from adding one byte to every pixel is so insanely small in the overall scheme of things that you REALLY shouldn't be worried about it. As Donald Knuth puts it, "Premature optimization is the root of all evil," so you shouldn't optimize until you actually have a speed/memory issue.
I've been using 24-bit textures for months without problems; one of the parameters to glTexImage2D() lets you specify the format your pixel data is in. I've never heard of anyone putting textures like these into a sheet. These aren't 32x32 tiles that you'd use in a 2D game; they're 256x256, 512x512, or even bigger. A 512x512 texture is 262,144 pixels; at 4 bytes per pixel that's an entire megabyte, so cutting it down to 3/4 of a megabyte definitely seems worth it.
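For what it's worth, a sketch of that kind of 24-bit upload (the helper name uploadRGBTexture is made up; it assumes the surface rows are tightly packed RGB with no padding):

```cpp
// Sketch: upload a 24-bit RGB SDL surface as an OpenGL texture.
// The classic gotcha: OpenGL assumes 4-byte row alignment by default,
// and rows of 3-byte pixels often aren't aligned, so relax the unpack
// alignment first. uploadRGBTexture is a made-up name, and the sketch
// assumes the rows are tightly packed (pitch == w * 3).
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

GLuint uploadRGBTexture(SDL_Surface* surface) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows may not be 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,  // 24-bit internal format
                 surface->w, surface->h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```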