Rendering way too many rectangles on a canvas


First of all, I’d like to say a huge thank you to the whole team for such a brilliant UI framework! It really saves me time and helps advance my scientific research in AGI.

However, as time went by and my dataset grew bigger, I started facing an issue with canvas rendering. My UI does fairly straightforward rendering of a rectangular matrix (backed by an ndarray), where each element is rendered as a square of a certain HSV color, augmented with some overlay geometry. The user can pan and zoom the whole thing, much like in the Game of Life example.

It works great when the matrix is relatively small, say, under 100 000 elements, but currently I’m operating on a 1316 × 1316 matrix, which is 1 731 856 elements, and the UI has started to lose responsiveness. The rendering pass is still < 20 ms, whereas primitive generation can take 100 ms or more. Parallelizing the matrix traversal with Rayon to collect the data for fill_rectangles makes things even worse, for some reason.

Of course, the first thing I did was clip the matrix to the actual viewport and render only the visible part. On close-up views it works great, but it doesn’t help when I zoom out to show the whole matrix.
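Roughly, that viewport clipping boils down to mapping the visible pixel range back to matrix index ranges. A minimal sketch of the mapping, per axis (the names `offset` and `cell_size` are illustrative, not from the actual code):

```rust
/// Half-open range [first, last) of matrix cells visible along one axis.
/// `offset` is the pan translation in pixels (negative when panned into
/// the matrix), `cell_size` is the on-screen size of one element after zoom.
fn visible_range(viewport_len: f32, offset: f32, cell_size: f32, n: usize) -> (usize, usize) {
    let first = ((-offset) / cell_size).floor().max(0.0) as usize;
    let last = ((viewport_len - offset) / cell_size).ceil().max(0.0) as usize;
    (first.min(n), last.min(n))
}

fn main() {
    // 800 px viewport, panned 100 px into a 1316-wide matrix, 10 px cells:
    // only cells 10..90 need primitives instead of all 1316.
    assert_eq!(visible_range(800.0, -100.0, 10.0, 1316), (10, 90));
}
```

When zoomed out far enough that every cell fits on screen, the range degenerates to the full `0..n`, which is exactly the case where clipping stops helping.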

So the question is: is it possible to accelerate or cache primitive generation in this case, mostly for pan & zoom? Yes, I do use geometry::Cache, but during panning the cache needs to be invalidated to reflect the changes, and then I have no option but to redraw everything.
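To make concrete why panning forces a full rebuild, here is a toy model of such a cache (not the actual geometry::Cache API): anything baked into the cached geometry, including the pan offset, forces re-tessellation when it changes.

```rust
/// Toy geometry cache: the build closure re-runs only when the version
/// changes. If the pan offset is baked into the generated geometry,
/// every pan must bump the version and trigger a full rebuild.
struct Cache<T> {
    version: u64,
    built_at: u64,
    value: Option<T>,
}

impl<T: Clone> Cache<T> {
    fn new() -> Self {
        Cache { version: 0, built_at: 0, value: None }
    }

    fn invalidate(&mut self) {
        self.version += 1;
    }

    fn draw(&mut self, build: impl FnOnce() -> T) -> T {
        if self.value.is_none() || self.built_at != self.version {
            self.value = Some(build()); // expensive primitive generation
            self.built_at = self.version;
        }
        self.value.clone().unwrap()
    }
}

fn main() {
    let mut cache = Cache::new();
    let mut rebuilds = 0;
    cache.draw(|| { rebuilds += 1; "geometry" });
    cache.draw(|| { rebuilds += 1; "geometry" }); // cache hit, no rebuild
    cache.invalidate(); // a pan happened
    cache.draw(|| { rebuilds += 1; "geometry" });
    assert_eq!(rebuilds, 2);
}
```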

Is it possible somehow to create the geometry beforehand and then apply a transformation afterwards? Something like rendering to a texture and then moving that texture in a viewport.
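Conceptually, that would mean tessellating once and only changing a per-frame affine view transform. A toy illustration of the idea (plain Rust, not iced code):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Point { x: f32, y: f32 }

/// A uniform scale followed by a translation -- the pan & zoom transform.
#[derive(Clone, Copy)]
struct View { scale: f32, tx: f32, ty: f32 }

impl View {
    fn apply(&self, p: Point) -> Point {
        Point {
            x: p.x * self.scale + self.tx,
            y: p.y * self.scale + self.ty,
        }
    }
}

fn main() {
    // Tessellate once...
    let vertices = [Point { x: 1.0, y: 2.0 }, Point { x: 3.0, y: 4.0 }];
    // ...then pan & zoom only changes the cheap per-frame transform.
    let view = View { scale: 2.0, tx: 10.0, ty: -5.0 };
    let on_screen: Vec<Point> = vertices.iter().map(|&p| view.apply(p)).collect();
    assert_eq!(on_screen[0], Point { x: 12.0, y: -1.0 });
}
```

On the GPU the same map is applied per vertex in the shader, so the cached triangle data never needs to be touched.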


This should be very easy to add, and it would unlock a lot of use cases. It’s an easy win. I can look into it soon.


I was wondering, did you find some time to work on this? No pressure, just wanted to know the situation.

If this is indeed easy to add, maybe you could guide me through the code so I would implement it myself? Or at least give me a few hints.

I’ve looked into the Canvas and Frame internals and more or less understand how draw primitives get converted to actual triangles.

Still, the way I see it, the transformation should be applied much later, probably in the shader machinery, where the existing immutable geometry could be transformed, or something like that.

@hecrj, a friendly reminder here :slight_smile:

I will look into it later today!


Awesome! Looking forward to it :pray:

Alright! I gave this a shot.

I was initially going to simply implement a translate method for Geometry, but I figured we could go ahead and implement scaling support at the rendering-primitive level as well; so I ended up replacing the Translate primitive with a more generic Transform one.

Take a look at this PR and give it a try:

Let me know if that works for you!


Amazing! Thank you so much! Will do it shortly!

Alright, I’ve done some quick hacks to see if it would work for me.

Overall I must say that it works surprisingly well :tada:

The primitive generation phase is now effectively zero (down from 350 ms to 60 μs), since I no longer need to invalidate the cache during pan & zoom.

However there are a few things I’d like to note:

  1. The geometry drawn on a canvas can now spill over the canvas boundary when translated & scaled. I think this can be addressed by introducing a clip transformation that cuts off everything outside a certain rect. If I’m not mistaken, it should map directly onto the already existing Primitive::Clip.

  2. Since the scale transform is now applied to everything, all drawn shapes are scaled, obviously. So a circle outline or a font becomes proportionally thicker. Depending on the task, this can be undesirable. For example, in my case I would like my circles to grow bigger yet remain a hair thin. The same goes for fonts, which should probably be re-rendered at the new dimensions while keeping their original weight. I’m not sure whether this is possible at all, but it would be interesting to have something like a scaling strategy for shapes.

  3. For really huge geometry with several million triangles, even a static one, rendering can take up to 70 ms, since it is O(n). Of course, that is still several orders of magnitude faster than the original implementation. Yet I believe it could be solved by caching the geometry as a rendered texture. That way we would not need to deal with triangles at all when drawing or panning, and rendering could be done in O(1).

An example of the first two issues, exaggerated to the extreme:

The circles were supposed to be drawn on the left pane (the code map).
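As a side note on point 2: assuming the draw code knows the current zoom when it builds the frame, a pre-tessellation workaround for strokes is to divide the width by the zoom before drawing, so the line stays hair thin on screen. This requires re-tessellating whenever the zoom changes, though, so it trades back part of the caching win.

```rust
/// Width to hand the tessellator so that, after the scale transform is
/// applied, the stroke is `on_screen_px` pixels wide regardless of zoom.
fn compensated_width(on_screen_px: f32, zoom: f32) -> f32 {
    on_screen_px / zoom
}

fn main() {
    // At 4x zoom, tessellate a 0.25 px stroke to get a 1 px line on screen.
    let width = compensated_width(1.0, 4.0);
    assert_eq!(width * 4.0, 1.0);
}
```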

  1. That’s a bug! A pesky one that isn’t easy to fix since our layering strategy at the rendering level is not ideal currently.
  2. I’m afraid that’s not possible. The tessellated geometry is just a bunch of triangles; there is no way to tell them apart after they are cached. Keep in mind you’d have the same issue with a texture. The solution here would be to split the circles into their own layer.
  3. Indeed. A texture-based approach would probably be better. The upcoming custom shader support should allow you to do this eventually. Check it out: [Feature] Custom Shader Widget by bungoboingo · Pull Request #2085 · iced-rs/iced · GitHub

Ah, I see. I saw Primitive::Text and thought that tessellation happens later in the pipeline, since the text there is still a bunch of characters and not a shape. So I assumed the same applied to other primitive shapes, like circles.

Sure, but a texture would allow transforming things with zero latency and updating the texture itself asynchronously.

I see. Well, I’ve ended up with the following workaround that does the job:

impl Geometry {
    pub fn clip(self, bounds: Rectangle) -> Self {
        match self {
            Self::TinySkia(primitive) => Self::TinySkia(primitive.clip(bounds)),
            #[cfg(feature = "wgpu")]
            Self::Wgpu(primitive) => Self::Wgpu(primitive.clip(bounds)),
        }
    }
}

and at the call site:

let geometry = geometry
    .transform(
        Transformation::scale(state.scaling) * /* … */
    )
    .clip(bounds); // extra clip

Everything is now rendered as expected, more or less:

Still, for some reason, the clipping area does not perfectly match the canvas bounds. See that white strip below the “Code Map” title bar? It should not be there. Also, parts of the canvas show through in the splitter space between the panes.

Strangely enough, the size of those white strips matches the size of the pane decorations. It looks like the bounds calculations are incorrect in the case of a pane grid.

P.S.: Indeed, the calculated bounds depend on the pane’s position within the grid and change according to the total size above and to the left of the canvas:

I’ve found the actual cause of the clipping error I mentioned above.

Primitive::Clip { bounds, content } => {
    let bounds = (*bounds * transformation) * scale_factor;

In the case of a scaling transformation, this alters the bounds twice, hence the broken rendering.

P.S. Sorry for the noise. I double-checked the result and, unfortunately, this does not affect the rendering.

Yes, but you will most likely run into issues with anti-aliasing. The edges of the geometry in the texture will start to look pixelated when you zoom in.

Textures also have size limitations.

I encourage you to avoid trying to fix the issue. You can try to work around it, but deep changes are required to fix it consistently.