`iced` has a `wgpu` renderer and a shader widget to draw to a viewport of the window using the `wgpu::Device` that `iced` creates. What would the overhead be to extend this to support viewport rendering using the backend graphics APIs that `wgpu` supports, specifically Vulkan? `wgpu` is an excellent rendering API, but it is not fully featured enough to meet all users’ rendering needs. Further, identifying a device that supports a renderer’s required features, limits, and extensions is a necessary step for correct and performant rendering.
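For context, this is roughly the surface where the shader widget hands the `wgpu::Device` to user code today. I'm paraphrasing the iced 0.12-era `shader::Primitive` trait from memory, so treat it as a sketch rather than the exact current API (trait shapes have changed between releases):

```rust
use iced::widget::shader;
use iced::widget::shader::wgpu;
use iced::Rectangle;

// A per-frame drawable produced by the shader widget. Sketch of the
// iced 0.12-era trait from memory; exact signatures vary by release.
#[derive(Debug)]
struct ScenePrimitive;

impl shader::Primitive for ScenePrimitive {
    // Receives the `wgpu::Device`/`wgpu::Queue` that iced created:
    // build pipelines, upload vertex and uniform data, cache in `storage`.
    fn prepare(
        &self,
        device: &wgpu::Device,
        queue: &wgpu::Queue,
        format: wgpu::TextureFormat,
        storage: &mut shader::Storage,
        _bounds: &Rectangle,
        _viewport: &shader::Viewport,
    ) {
        let _ = (device, queue, format, storage);
    }

    // Records draw commands targeting the widget's clipped region of the frame.
    fn render(
        &self,
        encoder: &mut wgpu::CommandEncoder,
        storage: &shader::Storage,
        target: &wgpu::TextureView,
        clip_bounds: &Rectangle<u32>,
    ) {
        let _ = (encoder, storage, target, clip_bounds);
    }
}
```

A `shader::Program` implementation then produces these primitives, and the widget is embedded in `view` via something like `iced::widget::shader(&scene)`.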
The major use case for this functionality would be developing GUIs for 3D editors, e.g. CAD applications and 3D model editors. Initially, I imagine one of the following paths would be required:
- Write a Vulkan renderer implementation for `iced` using the public `iced` rendering APIs. The `wgpu` renderer already targets Vulkan, so it seems like an unnecessary burden to write a dedicated Vulkan renderer when `iced`’s `wgpu` renderer can already use it.
or
- The developer would control the creation of the graphics API types, i.e. the instance, device, etc. `iced` could be given the created device and wrap it in a `wgpu::Device` to support its rendering. After a brief look at the `wgpu` API, I’m not sure this is totally possible yet, but ideally it could be done using `wgpu-hal` or `wgpu-core` types (see the sketch after this list).
- The user’s application would draw to specified viewports of the window, e.g. via new widgets.
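To make the second path more concrete: if I’m reading the `wgpu` API correctly, there are already unsafe hal-interop entry points in both directions, e.g. `wgpu::Instance::from_hal` and `wgpu::Adapter::create_device_from_hal` for wrapping externally created objects, and `wgpu::Device::as_hal` for reaching the raw handles underneath a device `wgpu` created. The latter is the shortest to sketch, and would let `iced` keep creating the device while the application drops down to Vulkan for its own rendering. This assumes a wgpu ~0.19-era callback-style `as_hal` and a `wgpu-hal` dependency pinned to the same version as `wgpu`; both have changed shape across releases:

```rust
use wgpu_hal as hal;

// Sketch: reach the raw Vulkan device behind the `wgpu::Device` that
// iced creates, so raw Vulkan work (extra extensions, custom pipelines)
// can render into a texture the shader widget then composites.
fn with_raw_vulkan_device(device: &wgpu::Device) {
    unsafe {
        device.as_hal::<hal::api::Vulkan, _, _>(|hal_device| {
            // `hal_device` is `None` if the device is not running on Vulkan.
            if let Some(hal_device) = hal_device {
                // The `ash::Device` backing iced's `wgpu::Device`.
                let raw = hal_device.raw_device();
                let _ = raw;
            }
        });
    }
}
```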
Finally, interacting with the objects drawn to these viewports may require lower-level access to the event loop, e.g. imagine handling click events inside the viewport (sketched below).
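For illustration, if the shader widget’s `Program` trait still surfaces events the way I remember from iced 0.12 (its `update` method receives events and can emit application messages), simple in-viewport clicks might not need event-loop access at all. Continuing the `Scene`/`ScenePrimitive` sketch from above, with a hypothetical `ViewportClicked` message:

```rust
use iced::advanced::Shell;
use iced::widget::shader;
use iced::{event, mouse, Point, Rectangle};

#[derive(Debug, Clone)]
enum Message {
    // Hypothetical message carrying the click position
    // relative to the viewport widget.
    ViewportClicked(Point),
}

struct Scene;

impl shader::Program<Message> for Scene {
    type State = ();
    type Primitive = ScenePrimitive;

    // Events that land on the widget arrive here; a left click inside
    // the bounds becomes an application message, which is where
    // picking/ray casting would start.
    fn update(
        &self,
        _state: &mut Self::State,
        event: shader::Event,
        bounds: Rectangle,
        cursor: mouse::Cursor,
        _shell: &mut Shell<'_, Message>,
    ) -> (event::Status, Option<Message>) {
        if let shader::Event::Mouse(mouse::Event::ButtonPressed(mouse::Button::Left)) = event {
            if let Some(position) = cursor.position_in(bounds) {
                return (
                    event::Status::Captured,
                    Some(Message::ViewportClicked(position)),
                );
            }
        }

        (event::Status::Ignored, None)
    }

    fn draw(
        &self,
        _state: &Self::State,
        _cursor: mouse::Cursor,
        _bounds: Rectangle,
    ) -> Self::Primitive {
        ScenePrimitive
    }
}
```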
Is there a better way to implement this functionality, or is there a trivial way to do this today that I am missing?
Thanks!