OpenGL is on the way out. Are you serious?! Almost all game consoles base their graphics APIs on the OpenGL specs (with the exception of the Xbox). Direct3D does not provide the flexibility that OpenGL does; to issue on-the-fly geometry in OGL it's a simple glBegin(), while in D3D you have to jump through all these hoops (taking everything and crunching it down to one call, several for vertex arrays), which TBH is a pain in the ass.
You seem to be saying that having a 10-year-old subsystem is a bad thing. You change to improve, you don't change for the sake of changing. And extensions added to one card are added to others as well: when nVidia or ATI create a new extension it is added to the spec. This means that the other companies can see it and implement it in their cards, so what's so double-edged about that? Sure, you have to wait for it to make it into the driver, but the time is minimal.
And OpenGL is not on its way out. Every non-Windows OS uses OpenGL as the primary graphics API, and this is not likely to change, since hell will freeze over before MS ports Direct3D to any Unix-based OS.
Whoa, hey now. I come in peace.
Yes, I do mean every bit of that. To be precise, the 3D API in general is on the way out. It is not even close to accurate to say that "almost all game consoles base their graphics APIs off the OpenGL specs." Neither the PS2 nor the PS3 uses OpenGL natively in any way, shape, or form. There are middleware toolchains that let you use OpenGL-ES on the PS3; the PS2 was a different beast entirely. PS2 Linux development is also possible with OpenGL, but the middleware is atrocious. Most of the middleware chains only attempt a direct port of your PC code to something the PS2 or PS3 can understand, which is FAR less than desirable, because these machines are not similar to a PC in many ways. The other thing to keep in mind is that OpenGL-ES is much closer to what was promised in OpenGL 3 than what was delivered; so much so that it does not resemble OpenGL in many ways. Even Sony's support of PSGL (a special version of OpenGL-ES for the PS3) is minimal.
The graphics architecture of the PS2 is also very minimal: it uses several very generic SIMD chips (VUs, or Vector Units) which do the majority of the T&L footwork. There are two vector units; the first can act as a co-processor to the CPU, the other operates completely separately, but both run in parallel with the CPU, and then the GS (Graphics Synthesizer) performs the fragment ops and rasterization. The result is that the graphics code is not very specific whatsoever, because the processors are very generic. They're optimized (like Intel CPUs are optimized with SSE/2/3) for highly parallel 128-bit vector ops, but they lack the optimizations that PC GPUs have in terms of separate onboard memory controllers for texturing, vertex assembly, and so forth. This is why you simply don't see "OpenGL for PS2": it's not there.
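For illustration only, here's the class of work I mean, written with PC-side SSE intrinsics rather than actual VU microcode (the helper function is hypothetical, just a sketch of a four-wide float op):

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Hypothetical helper: scale four packed floats in one 128-bit op.
       A PS2 VU chews through this kind of work natively; on a PC you'd
       reach for SSE to get the same four-at-a-time throughput. */
    void scale4(float *v, float s)
    {
        __m128 x = _mm_loadu_ps(v);          /* load 4 floats into one register */
        __m128 k = _mm_set1_ps(s);           /* broadcast the scalar 4 ways */
        _mm_storeu_ps(v, _mm_mul_ps(x, k));  /* 4 multiplies in one instruction */
    }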
The overwhelming majority (speaking from experience here) opt for in-house solutions and CTM (Close-to-Metal), which in practice means LibGCM for PS3 development. The PS3 is in the same league, opting out of a very highly specialized GPU pipeline and going for several more generic SIMD chips to carry out the bulk of the footwork.
Now, saying that "OpenGL is easier because you can pack draw calls in between a glBegin/glEnd pair" is just... well, I don't see where you're going with that. Most graphics programmers do not have such trivial issues. Vertex assembly isn't terribly difficult, nor is constructing a vertex buffer and executing draw calls from hardware-buffered objects. As a matter of fact, this is the preferred path in OpenGL via the VBO/IBO extensions. You'd find that vertex assembly on 50,000 vertices simply does not work within a glBegin/glEnd call; that's horrendously inefficient. And with FVF in D3D it is quite easy to quickly assemble a trivial vertex format and issue non-indexed render-primitive calls on client-side vertex lists. You've got to understand that my criticism of OpenGL is at an architectural level; I'm not really concerned with "how easy is it to draw a triangle" frivolities. See the sketch below for the contrast I mean.
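A rough sketch of both paths (GL 1.5-style C; the function names and 'verts' array are mine, and the GL entry points are assumed already loaded by your extension loader on Windows):

    #include <GL/gl.h>

    /* Immediate mode: fine for one triangle, hopeless for 50,000 vertices,
       because every single vertex is a function call marshalled to the driver. */
    void draw_immediate(void)
    {
        glBegin(GL_TRIANGLES);
        glVertex3f( 0.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glEnd();
    }

    /* VBO path: upload once, draw from server-side memory. In real code
       the create/upload happens at load time, not every frame. */
    void draw_vbo(const GLfloat *verts, GLsizei numVerts)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat),
                     verts, GL_STATIC_DRAW);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
        glDrawArrays(GL_TRIANGLES, 0, numVerts);
    }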
I don't think you're understanding how vendor extensions work. It's not the case that if ATi comes out with an extension it WILL be folded into the core spec. That's far from reality, and I'd urge you to find a case where that actually happens. If ATi can bring an extension to OpenGL that exploits hardware capability present only on their GPUs, thus edging nVidia out of the GL user competition, they will. They'd have no interest in bringing that into the core OpenGL spec; that does not happen. There are regular contributors from both nVidia and ATi on the ARB, but those core changes are very generic and benefit both parties, since it's vital to both that OpenGL succeed. "Other companies can see it and implement it in their card"? No, absolutely not. The whole point of these extensions is to one-up the other guy by exposing VENDOR-SPECIFIC hardware features. That's why vendor-specific extensions often can NOT be implemented on other vendors' cards without significant penalty, by virtue of how their hardware operates. They do one-up each other all the time... to the programmer's benefit, and consequently to his chagrin as well.
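This is exactly why portable GL code ends up sniffing the extension string and branching per vendor. Something like this (the extension names are real; the surrounding logic is my own illustrative sketch, and the substring match is the classic naive approach):

    #include <string.h>
    #include <GL/gl.h>

    /* Naive but traditional: test for a name in the extension string. */
    int has_ext(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts && strstr(exts, name) != NULL;
    }

    void pick_vertex_path(void)
    {
        if (has_ext("GL_NV_vertex_array_range")) {
            /* nVidia-only fast path */
        } else if (has_ext("GL_ATI_vertex_array_object")) {
            /* ATi-only fast path */
        } else {
            /* lowest-common-denominator fallback */
        }
    }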
Now let's think about the future here. Many of the biggest players in the realtime graphics arena (John Carmack, Tim Sweeney) are developing new technology that exploits non-rasterized renderer architectures. Intel's new GPU, "Larrabee", promises to potentially blow this field wide open. There are many advantages to non-rasterized render architectures, and several still to rasterization. With more GPGPU solutions, like CUDA, people are exploring new, more innovative ways to construct 3D vistas. I think this will be the way of the future, and over the next decade you'll see the 3D APIs either dropping off the face of the earth or only resembling their former selves by proxy.
The reason a 10-year-old subsystem is a problem is not a matter of "you change to improve"; it's that today's devices are not even close to the devices the API was built to exploit a decade ago. Hardware-buffered objects, the programmable pixel pipeline, RTT, MRT... I could go on and on. None of these things were even close to possible on decade-old hardware. That functionality is now bolted on via extensions to an aging subsystem full of redundancy and inefficiency. OpenGL is in need of a pretty major facelift, and while we were promised one, I honestly don't expect anything that sweeping to happen. Again, for reasons I outlined in my prior post.
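RTT is a good example of the bolt-on pattern: on pre-3.0 GL you reach for EXT_framebuffer_object. A minimal sketch, assuming the EXT entry points have already been loaded by your extension loader (the helper function and its shape are mine; the EXT calls and enums are the real extension API):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Sketch: set up render-to-texture via EXT_framebuffer_object. */
    GLuint make_rtt_target(GLsizei w, GLsizei h)
    {
        GLuint tex, fbo;

        /* Allocate an empty texture to catch the rendered pixels. */
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        /* Attach it as the color target of an FBO. */
        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, tex, 0);
        /* ...draw the scene here; it lands in 'tex' instead of the window. */
        return fbo;
    }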
OpenGL is not on the way out completely, for reasons I have outlined in my prior post. However, OpenGL support in the professional *game development* arena is certainly waning, primarily for a lack of cohesion and unity. That is for reasons both predictable and unpredictable (as the ARB has already shown us).
It's not a war here. People will feel strongly about one or the other, but the bottom line is that over the next 5 years, at least, I think you'll see professional support of OpenGL almost completely falling off in favor of more in-house GPGPU solutions (and D3D, for that matter).