Vulkan viewport rendering

iced has a wgpu renderer and a shader widget for drawing to a viewport of the window using the wgpu::Device that iced creates. What would the overhead be to extend this to support viewport rendering using the backend graphics APIs that wgpu itself targets, specifically Vulkan? wgpu is an excellent rendering API, but it is not featureful enough to meet every user's rendering needs. Further, identifying a device that supports a renderer's required features, limits, and extensions is a necessary step for correct and performant rendering.
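
For illustration, that last step looks roughly like the following in wgpu terms. This is a hedged sketch against a ~0.19-style API: the request_adapter/request_device signatures vary between wgpu versions, POLYGON_MODE_LINE is just a stand-in for whatever features a renderer actually needs, and pollster is only one way to block on the async calls.

fn select_device() -> (wgpu::Device, wgpu::Queue) {
    pollster::block_on(async {
        let instance = wgpu::Instance::default();

        // Pick an adapter; a real renderer would also inspect
        // adapter.features() / adapter.limits() before committing to one.
        let adapter = instance
            .request_adapter(&wgpu::RequestAdapterOptions {
                power_preference: wgpu::PowerPreference::HighPerformance,
                ..Default::default()
            })
            .await
            .expect("no suitable adapter");

        // Fail fast if the adapter cannot provide the features the
        // renderer depends on.
        adapter
            .request_device(
                &wgpu::DeviceDescriptor {
                    label: Some("application device"),
                    required_features: wgpu::Features::POLYGON_MODE_LINE,
                    required_limits: wgpu::Limits::default(),
                    ..Default::default()
                },
                None, // trace path; this argument varies by wgpu version
            )
            .await
            .expect("device with the required features")
    })
}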

The major use case for this functionality would be to develop GUIs for 3D editors, e.g. CAD GUIs and 3D model editors. Initially, I am imagining one of the following paths would be required:

  • Write a Vulkan renderer implementation for iced using the public iced rendering APIs. The wgpu renderer already targets Vulkan, so it seems like an unnecessary burden to write a dedicated Vulkan renderer when the iced renderer can already use Vulkan.

or

  • The developer would control the creation of the graphics API types, i.e. the instance, device, etc.
  • iced could be given the created device and wrap it in a wgpu::Device to support its rendering. After a brief look at the wgpu API, I’m not sure this is entirely possible yet, but ideally it could be done using wgpu-hal or wgpu-core types (a rough sketch follows this list).
  • The user’s application would draw to specified viewports of the window, e.g. via new widgets.
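
From that brief look, wgpu does seem to expose unsafe hal-interop entry points in this direction. Below is a rough sketch of the wrapping step, assuming the wgpu-hal Vulkan objects have already been constructed from the application's own ash handles (wgpu-hal has unsafe raw-handle constructors for that, whose exact, version-dependent signatures I won't attempt here):

// Sketch only: from_hal / create_adapter_from_hal / create_device_from_hal
// are unsafe, and their availability and signatures depend on the wgpu version.
unsafe fn wrap_in_wgpu(
    hal_instance: wgpu_hal::vulkan::Instance,
    hal_adapter: wgpu_hal::ExposedAdapter<wgpu_hal::api::Vulkan>,
    hal_device: wgpu_hal::OpenDevice<wgpu_hal::api::Vulkan>,
) -> Result<(wgpu::Instance, wgpu::Device, wgpu::Queue), wgpu::RequestDeviceError> {
    let instance = wgpu::Instance::from_hal::<wgpu_hal::api::Vulkan>(hal_instance);
    let adapter = instance.create_adapter_from_hal(hal_adapter);
    let (device, queue) = adapter.create_device_from_hal(
        hal_device,
        &wgpu::DeviceDescriptor::default(),
        None, // trace path
    )?;
    Ok((instance, device, queue))
}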

Finally, interacting with the objects drawn to these viewports may require lower level access to the event loop, e.g. imagine handling click events in the viewport.

Is there a better way to implement this functionality or is there a trivial way to do this today that I am missing?

Thanks!

Brainstorming here, but it sounds like the best path would be to write a Vulkan renderer with additional capabilities beyond what tiny-skia and wgpu offer today, and to place it alongside those two?

Ultimately, I guess it all depends on which specific capabilities you’re looking to use from the Vulkan API, how you imagine the iced side of the API looking, and what specifically you think is missing from the current Renderer trait that you can’t do today.

In addition to the core Renderer trait, iced has various other subtraits like text::Renderer and geometry::Renderer which define additional capabilities the tiny-skia and wgpu renderers implement.

Your Vulkan renderer would need to implement all of these, but it could also offer some other three::Renderer trait that does whatever you need it to do, and your widgets, being generic over Renderer, could make use of those specific trait methods.
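
Something like this shape is what I mean; a minimal sketch assuming iced_core’s Renderer trait as the supertrait, where three::Renderer and draw_scene are names invented purely for illustration:

mod three {
    // An extra capability trait alongside iced's text::Renderer and
    // geometry::Renderer, implemented only by the custom Vulkan renderer.
    pub trait Renderer: iced_core::Renderer {
        fn draw_scene(&mut self, bounds: iced_core::Rectangle);
    }
}

// Widgets stay generic over the renderer and simply tighten their bound:
fn draw_scene_with<R: three::Renderer>(renderer: &mut R, bounds: iced_core::Rectangle) {
    renderer.draw_scene(bounds);
}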

I think I would imagine a good amount of decoupling between iced and the 3D renderer. Ideally, the 3D renderer wouldn’t have to be limited by an iced Renderer trait.

I imagine there would just be a widget that provides a viewport of a specified size and location, and iced would trust the user to draw to it, e.g.

struct MyRenderer {
    device: ash::Device,
    // Other data...
}

impl iced::widget::ViewportRender for MyRenderer {
    fn render(&mut self, viewport: iced::render::Viewport) {
        // Render to viewport using self.device. iced will not draw to this viewport.
    }

    // Also, somehow support event callbacks, e.g. on_click and on_hover, that can mutate `MyRenderer`'s state (one possible shape is sketched after this block)
}
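
For the event side, one possibility might be a companion trait with default no-op handlers that iced would call with viewport-relative cursor positions. Every name below is invented for illustration:

// Sketch only: a hypothetical companion trait for viewport events.
trait ViewportEvents {
    // Called when the viewport is clicked; an implementation may mutate the
    // renderer's state. The position is assumed to be relative to the
    // viewport's top-left corner.
    fn on_click(&mut self, _position: iced::Point) {}

    // Called while the cursor moves over the viewport.
    fn on_hover(&mut self, _position: iced::Point) {}
}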