Current camera bounds and viewport transform

:information_source: Attention Topic was automatically imported from the old Question2Answer platform.
:bust_in_silhouette: Asked By ruimgoncalves
:warning: Old Version Published before Godot 3 was released.

How can I get the current camera’s world bounds?

I’m facing two problems. First, I can’t seem to get the currently active camera from the engine: get_viewport().get_camera() returns null, so I have to specify the camera’s node path instead.

Second, I’m using get_viewport_transform().affine_inverse() to get the camera’s global bounds, but if I resize the window the coordinates get all mixed up.

I’m trying to implement a screen-edge arrow that points to an off-screen node.
I managed to get it working, but if the window gets resized the calculations go wrong.

Here is the code: create a sprite, attach this script, and select two nodes. Resize the window to see the problem.
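A minimal sketch of the kind of script described above (not the original code; it uses Godot 3 API names, which differ from the pre-3.0 version this question was written against, and a placeholder exported path for the target node):

```gdscript
extends Sprite

# Placeholder target; the original setup tracked two selected nodes.
export(NodePath) var target_path

func _process(delta):
    var target = get_node(target_path)
    if target == null:
        return

    # World-space rect currently visible on screen: map the screen corners
    # back into canvas/world coordinates with the inverted viewport transform.
    var inv = get_viewport_transform().affine_inverse()
    var top_left = inv.xform(Vector2(0, 0))
    var bottom_right = inv.xform(get_viewport_rect().size)
    var visible_rect = Rect2(top_left, bottom_right - top_left)

    if visible_rect.has_point(target.global_position):
        hide()  # target is on screen, no arrow needed
    else:
        show()
        # Pin the arrow to the edge of the visible rect and aim it at the target.
        var x = clamp(target.global_position.x, visible_rect.position.x, visible_rect.end.x)
        var y = clamp(target.global_position.y, visible_rect.position.y, visible_rect.end.y)
        global_position = Vector2(x, y)
        rotation = (target.global_position - global_position).angle()
```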

OK, the code works fine if stretch_mode is viewport; previously it was 2d.

ruimgoncalves | 2016-05-19 19:04

:bust_in_silhouette: Reply From: Warlaan

I suggest implementing that with an appropriate shader rather than by calculating world space coordinates.

Basically the graphics card doesn’t care about world coordinates at all. It receives arbitrary data (e.g. model positions in model space) and arbitrary parameters (e.g. a matrix for the conversion from model space to screen space) and runs arbitrary code on them (e.g. a vertex shader that applies the matrix to every vertex in the model data). Only after the vertex shader has run does the graphics card expect a list of screen-space coordinates that make up triangles, so it can apply the rasterizer to them. That is the first point where the meaning of the data you are working with is defined for the graphics card; everything before that is up to you (or rather, up to the engine).
So by retrieving the camera matrix, inverting it, using that to transform the screen position of an arrow into world space, and then passing the result to the graphics card along with a shader that transforms those coordinates back into screen space, you are doing a lot of work and asking the hardware to perform several calculations every frame that are completely unnecessary.
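To make that round trip concrete, here is a small illustration (a sketch, not code from this thread; Godot 3 API, run from any CanvasItem script): the CPU converts a screen point to world space, and the very same matrix then maps it straight back to where it started.

```gdscript
extends Node2D

func _ready():
    var screen_point = Vector2(100, 50)
    # CPU side: screen space -> world space via the inverted camera/canvas matrix.
    var world_point = get_viewport_transform().affine_inverse().xform(screen_point)
    # Renderer side (conceptually): the same matrix maps it back to screen space.
    var back_on_screen = get_viewport_transform().xform(world_point)
    print(back_on_screen)  # (100, 50) again - the two transforms cancel out
```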
That’s not to say your solution can’t end up performing better, depending on the restrictions of the hardware you are working on. Adding a new shader does have a certain overhead, but in most cases that overhead is either irrelevant, outweighed by the benefit of doing fewer calculations (e.g. when rendering a lot of those arrows), or avoided entirely because you already have a shader that works on screen-space coordinates (e.g. for UI elements).

Speaking of shaders that are already there: it should be possible to implement the same thing by rendering a 3d model in 2d space. There’s a demo for that; it might be helpful to check it out.

Yep, check out the demo; it’s probably the easiest solution and will probably result in good performance as well.

  • If all you want is an arrow, ignore the camera matrix and implement the whole thing using a CanvasLayer.
  • If you want a 3d arrow that is static, do the same thing with a rendered image of the arrow.
  • If you want a 3d arrow that is animated or otherwise dynamic, use a viewport sprite to display a viewport that renders the arrow.
  • If you want a 3d arrow that has the correct distance from the camera and the correct lighting (i.e. behaves like a proper 3d object in the scene), then I’d suggest writing a custom shader that calculates the position in screen space, rather than calculating a world position on the CPU just so that it ends up at the correct screen-space position after the vertex shader.

Warlaan | 2016-05-22 07:17

Thanks for your in-depth explanations, but now I’m confused. I liked the CanvasLayer idea, since I’m implementing a 2D arrow.
But how can I ignore the camera matrix and still get the screen bounds, and the positions of the nodes relative to those bounds?

ruimgoncalves | 2016-05-23 16:48

Using a CanvasLayer is Godot’s way of saying “please ignore the camera matrix for these objects and use a custom matrix instead”. Or to put it differently: stuff on a CanvasLayer moves with the camera (unless you change the CanvasLayer’s offset). If you place a 2d element at position 0,0 on a CanvasLayer without offset, it will be drawn at the top left corner of the screen, no matter where the camera is.
Unlike a shader, a CanvasLayer uses pixels as its unit of measure, so you need OS.get_screen_size() (or the viewport size, if the game runs in a window) to determine where the screen ends, but after that positioning an arrow is a lot easier than the approach you originally used.

All that’s left is determining the correct direction towards the target, and that should be fairly easy using Camera.unproject_position(Vector3 world_point).
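Putting those pieces together, a rough sketch of that CanvasLayer setup (my own illustration, not code from the thread; it assumes a 3D camera and target, an arrow Sprite that is a child of a CanvasLayer, placeholder node paths, and the viewport size rather than OS.get_screen_size() so it also behaves correctly in a resizable window):

```gdscript
# Arrow sprite placed under a CanvasLayer, so its position is in screen pixels.
extends Sprite

export(NodePath) var camera_path  # placeholder: the active 3D Camera
export(NodePath) var target_path  # placeholder: the node to point at

func _process(delta):
    var camera = get_node(camera_path)
    var target = get_node(target_path)
    var screen_size = get_viewport().size

    # Project the target's world position into screen space.
    var screen_pos = camera.unproject_position(target.global_transform.origin)
    var behind = camera.is_position_behind(target.global_transform.origin)
    var on_screen = not behind and Rect2(Vector2(), screen_size).has_point(screen_pos)

    if on_screen:
        hide()
    else:
        show()
        # Pin the arrow to the screen edge (with a small margin) and aim it at
        # the projected position. A target behind the camera projects to a
        # mirrored point, which a complete solution would handle separately.
        var margin = 32
        var x = clamp(screen_pos.x, margin, screen_size.x - margin)
        var y = clamp(screen_pos.y, margin, screen_size.y - margin)
        position = Vector2(x, y)
        rotation = (screen_pos - position).angle()
```

The same structure should work for a purely 2D game by replacing unproject_position with the inverted canvas transform discussed earlier in the thread.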

Warlaan | 2016-05-23 19:27