To use GL or not to use GL ( and which GL to use )
Posted: Fri Jun 14, 2013 10:25 pm
by like80ninjas
Okay, so I'm at a bit of a crossroads and wanted a more educated opinion.
I've gotten pretty good with OpenGL over the past few years, both fixed-function and modern GL.
I'm also pretty good with SFML, and end up using it for compatibility.
My problem is that I prefer GL. I like to write everything my way, and think it ends up cleaner that way.
But fixed-function is too outdated, and I seem to be having trouble getting modern GL to run on a lot of devices.
For example, when I start a new project I often use my girlfriend's laptop to test. It's only about two years old, but it isn't a very nice laptop, and even with full driver updates it only supports GL 2.1, which effectively breaks anything I try to write in modern GL because the shaders aren't supported. Is this something I should just get over, or is there a middle-ground version of GL I can use that both supports modern shaders and works on low-end OpenGL drivers?
I know for a fact SFML uses OpenGL as its library and even supports shaders, but I'm not sure what version it uses, and SFML apps run fine on my girlfriend's laptop. How does it do this?
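The only SFML-side thing I've found is its runtime shader check (this is the SFML 2.x API, and the file name is just a placeholder), so I'm guessing it simply falls back to its non-shader path when shaders aren't there:

    // Minimal sketch: only use an sf::Shader if the driver actually supports it.
    #include <SFML/Graphics.hpp>

    int main()
    {
        sf::RenderWindow window(sf::VideoMode(800, 600), "shader test");

        sf::Shader shader;
        bool useShader = sf::Shader::isAvailable()                               // runtime capability check
                      && shader.loadFromFile("effect.frag", sf::Shader::Fragment); // placeholder file name

        sf::RectangleShape quad(sf::Vector2f(800.f, 600.f));
        while (window.isOpen())
        {
            sf::Event ev;
            while (window.pollEvent(ev))
                if (ev.type == sf::Event::Closed)
                    window.close();

            window.clear();
            // Draw with the shader only when it's available; plain draw otherwise.
            window.draw(quad, useShader ? sf::RenderStates(&shader) : sf::RenderStates::Default);
            window.display();
        }
    }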
Overall I just feel like I am missing something.
Re: To use GL or not to use GL ( and which GL to use )
Posted: Fri Jun 14, 2013 10:50 pm
by X Abstract X
You pretty much have 2 options.
Option 1: Choose an older version of OpenGL and make sure you don't use any functionality newer than that version. This is a good choice for most indies, because most of us aren't trying to do anything cutting-edge graphics-wise. GL 2.1 is a pretty popular version to target; it lets you write shaders in GLSL 1.20 and it's old enough to be widely supported. You have to think about your target audience, though: do some research into which versions of OpenGL are supported on which generations of hardware, then decide which version you want to target.
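If it helps, here's roughly what a GL 2.1 / GLSL 1.20 shader pair looks like, kept as C++ string constants (the names are just examples, not from any real project). Note the old-style attribute/varying and gl_FragColor instead of the 1.30+ in/out syntax:

    // Minimal GLSL 1.20 vertex/fragment pair as C++ string constants.
    const char* vertexSrc120 =
        "#version 120\n"
        "attribute vec3 aPosition;\n"   // 1.20 uses attribute/varying, not in/out
        "attribute vec2 aTexCoord;\n"
        "varying vec2 vTexCoord;\n"
        "uniform mat4 uMvp;\n"
        "void main() {\n"
        "    vTexCoord = aTexCoord;\n"
        "    gl_Position = uMvp * vec4(aPosition, 1.0);\n"
        "}\n";

    const char* fragmentSrc120 =
        "#version 120\n"
        "varying vec2 vTexCoord;\n"
        "uniform sampler2D uDiffuse;\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(uDiffuse, vTexCoord);\n"  // gl_FragColor + texture2D(), no user 'out'
        "}\n";

Compile and link these through the normal glCreateShader/glCompileShader/glLinkProgram path and they'll run on a 2.1-only driver like the one on that laptop.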
Option 2: Make use of different OpenGL versions based on the best version available on each user's system. This is what modern commercial games tend to do (I'm guessing SFML does this too). The benefit is obvious: you get to support a wide variety of users while still using the newest functionality on each system. Obviously, though, this makes writing your renderer far more time-consuming and complicated; basically, it's like writing a whole bunch of renderers, each in a different version of OpenGL and GLSL.
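A rough sketch of the detection side of option 2 might look like this (the path names and the 3.x cutoff are made up for illustration; the GL queries themselves are standard and need a context already current):

    // Pick a render path from what the driver reports.
    #include <GL/glew.h>   // just for the GL enums/prototypes
    #include <cstdio>

    enum RenderPath { PATH_GL21, PATH_GL3PLUS };

    RenderPath chooseRenderPath()
    {
        // Both strings are valid on GL 2.0+; a GL context must be current here.
        const char* version = (const char*)glGetString(GL_VERSION);                  // e.g. "2.1.0 - Build ..."
        const char* glsl    = (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION); // e.g. "1.20"
        printf("GL %s / GLSL %s\n", version, glsl);

        int major = 0, minor = 0;
        sscanf(version, "%d.%d", &major, &minor);

        return (major >= 3) ? PATH_GL3PLUS   // modern renderer: VAOs, GLSL 1.30+ shaders
                            : PATH_GL21;     // fallback renderer: GLSL 1.20, no VAOs
    }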
Re: To use GL or not to use GL ( and which GL to use )
Posted: Sat Jun 15, 2013 10:55 am
by dandymcgee
Option 3: "System Requirements"
Re: To use GL or not to use GL ( and which GL to use )
Posted: Mon Jul 08, 2013 7:36 pm
by superLED
I would go with Option 3 if you want to make the best game you can. Then you don't need to focus on this problem and can focus on the game itself instead.
Re: To use GL or not to use GL ( and which GL to use )
Posted: Tue Jul 09, 2013 9:04 pm
by qpHalcy0n
Welcome to the nebulous nature of the modern OpenGL API...
It's really quite crap. I absolutely abhor the API. I've written at great length about it, and as predicted, it's not going to change anytime soon (ever). The best bet is to shoot as high as you can go, which will be limited by your development platform. You can either say "you simply can't run this unless you have XXX hardware", or you can provide several pipelines that fork based on hardware caps. So in Quadrion we'd have a pipeline we'd shoot for, but if your device didn't support SM4, we'd fall back to some SM3 pathway. You can conditionally compile shaders so that you CAN fork it one way or another. It's an absolute ton of extra work.
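Something along these lines (just a sketch, names made up, error reporting trimmed): the same GLSL source carries both branches, and you pick one when you compile.

    // Fork one shader source by hardware caps: prepend a #define at runtime so the
    // GLSL can #if/#else between a high-end branch and the fallback branch.
    #include <GL/glew.h>
    #include <string>

    GLuint compileForCaps(GLenum stage, const std::string& body, bool highEndCaps)
    {
        // body must not contain its own #version line; this wrapper supplies it.
        std::string src = "#version 120\n";
        src += highEndCaps ? "#define HIGH_END 1\n" : "#define HIGH_END 0\n";
        src += body;   // body uses '#if HIGH_END ... #else ... #endif' blocks

        GLuint shader = glCreateShader(stage);
        const char* ptr = src.c_str();
        glShaderSource(shader, 1, &ptr, NULL);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) { glDeleteShader(shader); return 0; }  // caller drops to a simpler pathway
        return shader;
    }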
My best advice is to focus on writing a very good effect framework. FX and CGFX were both pretty good solutions for attacking this problem (CGFX the lesser of the two), but neither is supported anymore, because everyone has done what I just said: written a very good effect framework for themselves. If your back-end is OpenGL based, then the shader code will be GLSL, but the effect encapsulates every piece of render state that is to be set for every pass of every effect. In that sense, the effect deals with the pathways to take. It's less C/C++ code to write, and you can deal with render bugs much more quickly.
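The skeleton of that idea, stripped way down (these names are made up for illustration, this isn't Quadrion code):

    // Each pass bundles its GLSL program with the render state it expects, so the
    // pipeline choice becomes data instead of glEnable/glBlendFunc calls scattered
    // through the C++ side.
    #include <GL/glew.h>
    #include <vector>

    struct PassState
    {
        GLuint program;              // linked GLSL program for this pass
        bool   depthTest;
        bool   blend;
        GLenum blendSrc, blendDst;   // only used when blend == true
    };

    struct Effect
    {
        std::vector<PassState> passes;   // e.g. depth prepass, then the shaded pass
    };

    void applyPass(const PassState& p)
    {
        glUseProgram(p.program);

        if (p.depthTest) glEnable(GL_DEPTH_TEST);
        else             glDisable(GL_DEPTH_TEST);

        if (p.blend) { glEnable(GL_BLEND); glBlendFunc(p.blendSrc, p.blendDst); }
        else         glDisable(GL_BLEND);
    }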
That said, if you're testing on a "shitty" laptop, it's likely you have an Intel integrated graphics chip in there, either the 4XXX or the 2XXX. Both of them are absolute crap. They shit on the specification, and their performance is three generations behind what's advertised at best. I made it a point to discard Intel graphics as a target platform completely. You might be well advised to do the same...