Hmm. I've only really played around with both (OpenGL 4.x and Vulkan), but while Vulkan had an insane amount of boilerplate initially, it was really intuitive to me. The situation of "what do I even have to google here?" never came up, while that happened a lot with OpenGL. If you already have a working code base obviously that doesn't really matter, but if I was writing a new engine today, I'd probably use Vulkan just because the API seems so much more intuitive to me. (Well, Vulkan or D3D11.)
Similar here, except i'm sticking with OpenGL 1.x (or 2.x for when i want shaders... or 3.x/4.x for when i want MORE shaders :-P), exactly because of the sheer simplicity of the API.
I have written a bit of Vulkan code (i wrote this[0] the day the spec came out, after banging on it for several hours - and i found the spec quite readable, at least from the side of someone who wants to use it... someone i know who worked on the implementation side has told me that it wasn't that great), but i find the API way too ugly and verbose for my taste.
I have considered writing a small OpenGL-like wrapper on top of it since i am concerned that OpenGL quality will deteriorate in the future (though that would break a TON of games, including the ultrapopular Minecraft), but for now things work fine.
Somehow the ARB, and now Khronos, keep forgetting that having something like MetalKit or DirectXTK as part of the specification really matters for onboarding newbies.
OpenInventor could have been it, but SGI had other plans for it, and no one at either ARB or Khronos actually cared about it.
It isn't just newbies, i have been writing OpenGL code for almost 2 decades and i see no reason to use the harder parts of the API when i can simply use the easy parts :-P.
Though i disagree with OpenInventor, it is too complex and takes too much upon itself, which few wanted - see Direct3D Retained Mode which despite being a much simpler API (both compared to D3DIM and OpenInventor) didn't see much use and in an uncharacteristic move by Microsoft (especially at the time) it was removed from Direct3D.
A better solution would have been something like GLUT, but with a few more utilities thrown in (like vector math stuff - OpenGL implementations already have the code anyway, why not expose it?). And GLUT was more popular than OpenInventor ever hoped to be despite not offering more than a few basic things.
That middleware is a different beast though; they are different engines providing different solutions for different problems. You can't have any single one of them be the solution to all problems, and most of them are too complicated to be defined as a standard that is supposed to live for decades.
OpenGL, Vulkan and Direct3D are at a level where they enable you to write your own engine, but not at the level where they provide the engine itself. Microsoft tried it with Direct3D RM and it didn't work, and Sun also tried it with Java3D as the official way to do 3D in Java, but it also didn't catch on (it caught on more than D3DRM, but that is mainly because there was no other official way - its popularity paled in comparison to the OpenGL bindings that appeared soon after, and nowadays it has been reimplemented to live on top of those bindings).
I'm considering switching away from OpenGL just to get rid of the global state. It's too tiring to track the OpenGL state just to realize you forgot to unset some texture in a completely different part of code.
My favourite of the newest APIs is Metal, because it's very easy to jump to from OpenGL (a triangle in Metal is about 30 lines of code). Perhaps WebGPU is an alternative once a desktop translation layer is created (Google is working on one called Dawn).
There is another translation layer called wgpu (https://github.com/gfx-rs/wgpu) being developed as well. It's written in Rust and has bindings for C, Rust (https://github.com/gfx-rs/wgpu-rs), and some initial support for other languages (Python, Julia, Scopes, etc.).
Dawn and wgpu have also been collaborating to create a common set of WebGPU headers. The WIP headers are located at https://github.com/webgpu-native/webgpu-headers if you're interested in contributing or following.
TBH, OpenGL has been improving in that respect. See, e.g., glTextureSubImage2D[1], which takes the texture object's name directly, unlike glTexSubImage2D, which operates on whatever texture is currently bound.
Yes, I am aware of the direct state access extensions. It was a good step forward, but a bit too late. And it still applies only to GL objects like buffers, samplers, textures. For things like setting scissor test, blending, depth test/write, color masks, binding textures, you're back to the global state.
I really like how vulkan tries to model modern hardware--reminiscent of C. I spent a few weekends writing a toy renderer with it and learned quite a bit about the modern graphics pipeline.
Writing Vulkan for a quick hobby project is probably a bit much, but it seemed like a great choice where extra control is needed. Hopefully more libraries will mature to take care of the dirty work (thousands of lines of initialization).
Is Vulkan the response to the fact that single cores aren't getting faster and games need to move towards multithreading? Is there an analogous solution for other systems like physics, collisions or pathfinding?
The TL;DR reason for those new 3D APIs is essentially to drastically reduce CPU work that needs to happen in the graphics driver layer in the "old" 3D APIs (but that also means that if your application isn't spending a lot of CPU time in the graphics driver, for instance because it is fillrate bound, then that application won't benefit much from moving to the new APIs).
OpenGL's original design was a "fine grained state machine" which doesn't map well to modern GPU architectures, and every time a "micro state" in that big state machine is changed, the GL driver needs to translate that change into the much coarser state that GPUs accept. But it turns out that many 3D applications don't even need to change unique states one by one, so each frame your code translates mostly static and coarse "application rendering state" into GL's fine grained state, only to have the GL driver translate that fine grained state back into coarse GPU state.
That's just one piece of the puzzle but I think explains the motivation behind the modern 3D APIs best.
The "other" 3D-API, Direct3D already took steps to group fine grained state into coarser state (starting with D3D10 and D3D11), the problem there was that they didn't come up with a good solution for threaded rendering (generating rendering work on different CPU threads), that's the other big thing that the modern 3D APIs solve properly. You essentially build render command lists on multiple CPU threads, and then enqueue those command lists on the main thread to be processed by the GPU.
That's part of it, but there's other reasons too. Promit[1] had a nice post that described four goals of the new generation of APIs: improving validation, reducing the complexity of the driver, allowing useful multi-threading, and giving developers more control over how the available hardware is used (e.g. multi-GPU). It's not an exhaustive list, but he filled in some of the backstory quite well.
I am eager to have a usable implementation of OpenGL on top of Vulkan, like Zink. At last, no more different implementations of OpenGL from one vendor to another.
Sure, but the vendor specific black box is getting much smaller. The idea is that it's better to have client and/or middleware code paths battle actual hardware differences than having them battle actual hardware differences and different sets of smoke and mirrors.
As flohofwoe has pointed out, yes, this can currently be done using Nvidia-specific extensions.
If you are interested, a friend of mine and a few coworkers pieced together a small proof-of-concept game engine in their spare time that uses Vulkan ray tracing on Nvidia RTX cards. They finally released it on GitHub a few days ago:
There are days when I wish that DirectX was the open standard, rather than the other way around. It's such a cleaner, better designed API - quite possibly because it was not open, and so it doesn't have anywhere near as much of the design-by-committeeisms that infect OpenGL/Vulkan.
It is not a next generation OpenGL, unless one wants to become an expert in driver and compiler development on top of mastering graphics programming.
Currently Khronos' answer for those that don't want to become such experts is to stick with OpenGL; the problem is that it isn't getting many updates beyond 4.6, and Vulkan is not getting a more developer-friendly API.
Most devs will be better off choosing a middleware engine and just checking the box for the desired graphics API backend.
I think "expert in driver and compiler development" is too much. It takes a lot more to get started, because it doesn't assume any defaults, but after that it isn't too hard as long as you don't venture into topics like multithreading - and in OpenGL you never could go into multithreading in the first place.
I think the best middle ground between OpenGL and middleware engines are abstraction layers such as gfx-rs and bgfx, which offer a low-level but user-friendly API and can target several different graphics API backends.
The theory was that OpenGL or any other API of that kind is reasonably easy to implement on top of Vulkan, providing very long term support and compatibility for OpenGL-based applications.
In practice, just like 15 years ago, you can use Mesa3D, which remains as comprehensive and unexciting as ever, with constantly improving performance thanks to building on Vulkan.
In the more mainstream part of practice, rendering engines get rid of OpenGL to use Vulkan because it's usually better.
Unfortunately the API update model hasn't changed much from OpenGL it seems ;)
...not talking about the idea to first test out new ideas in vendor-specific extensions, and then elevate them to the core, this is definitely a good idea - but about stacking new stuff on the previous API version, this approach is what turned OpenGL into the heap of accumulated cruft it is today.
Direct3D instead was a new API with each new major version, which in the beginning sounded insane, but turned out the better decision in the end because that made it possible to discard the cruft (the crucial part is that old API versions are frozen but still supported).
OpenGL deprecated the cruft in 3.0, removed it in 3.1, then added a way to get it back with the compatibility profile in 3.2. Turns out a lot of people still wanted that cruft, and they were a big enough market that people listened to them. I believe it was CAD software asking for this - and on Windows, CAD is pretty much the only major user of OpenGL.
Direct3D is mostly used for games so if your new game has to use a new API that's not _that_ big of a deal. You don't start from scratch every couple years with regular software though.
LOL. It is not the nextgen version of GL. It is a tool that would let you make your own OpenGL. It is way too low level and takes an insane amount of work compared to OpenGL/DirectX. Sure, big engine developers would love it. Smaller guys like myself: not so much.
That's the difference right here. If you are in the business of making engines, then yes, you can benefit from basing your engine on something like Vulkan. Me: I am not exactly doing games/engines. I use hardware accelerated graphics for some other type of development. OGL/DirectX are already nuisance enough for me (just a friggin tool that I want to use and forget). Having to go even lower level will not make me any happier. Well, that is unless I find a suitable rendering library. Hopefully some candidates are available now; I just have to find time to evaluate them.
Talk by the same guy at GDC about porting Doom 3 to Vulkan and Stadia: https://stadia.dev/intl/en/blog/gdc-2019-session:first-light...