
[core][texture][text][shapes] Software renderer for 2D #1370

Closed
raysan5 opened this issue Sep 8, 2020 · 12 comments
Labels: new feature (This is an addition to the library)

Comments

raysan5 (Owner) commented Sep 8, 2020

raylib includes a set of ImageDraw*() functions that rely only on CPU memory (no GPU required). Those functions can be used to implement a basic 2D software renderer, and they have seen significant performance improvements lately. See the minimal sketch below.
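As a quick illustration, here is a minimal sketch of CPU-only drawing with the existing Image*() API, with no window or GPU involved (the output file name is just an example):

```c
#include "raylib.h"

int main(void)
{
    // CPU-side "framebuffer": plain pixels in RAM, no GPU or window required
    Image canvas = GenImageColor(320, 240, RAYWHITE);

    ImageDrawRectangle(&canvas, 20, 20, 100, 60, RED);
    ImageDrawCircle(&canvas, 200, 120, 40, BLUE);

    ExportImage(canvas, "frame.png");   // inspect the result on disk
    UnloadImage(canvas);
    return 0;
}
```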

Here are some notes on how it could be implemented, still under consideration:

  • New compilation flag: USE_SOFTWARE_RENDERER -> Some modules are no longer required: rlgl.h, models.h, camera.h
  • Texture2D (GPU) should fall back to Image (CPU)
  • Some modules require function mapping: core.c, textures.c, text.c and shapes.c
  • Some functions won't be usable -> code should be commented out or pre-processed

Here are some design details:

/// core.c
void InitWindow();                   -> Create Image framebuffers (back + front) and NO_CONTEXT window (just for inputs)
void ClearBackground(Color color);   -> ImageClearBackground(Color color);
void BeginDrawing(void);             -> Just timing
void EndDrawing(void);               -> [rlglDraw(), SwapBuffers(), PollInputEvents(), Timing()]

void BeginMode2D(Camera2D camera);                          -> Setup Image transform (matrix/kernel?)
void EndMode2D(void);                                       -> Apply Image transform
void BeginMode3D(Camera3D camera);                          -> no 3d
void EndMode3D(void);                                       -> no 3d
void BeginTextureMode(RenderTexture2D target);              -> Replace internal framebuffer Image by custom one
void EndTextureMode(void);                                  -> Reset drawing to internal framebuffer
void BeginScissorMode(int x, int y, int width, int height); -> Define drawing rectangle (check before ImageDraw() on framebuffer)
void EndScissorMode(void);                                  -> Reset drawing rectangle

static bool InitGraphicsDevice(int width, int height);      -> Create Image framebuffer
static void SetupFramebuffer(int width, int height);        -> Setup Image framebuffer? -> Probably not required
static void SetupViewport(int width, int height);           -> Set Image drawing area?
static void SwapBuffers(void);                              -> Copy Image back buffer to front buffer -> ISSUE: SWAP to SCREEN!


/// textures.c
Texture2D LoadTexture(const char *fileName);                                 -> LoadImage()  
Texture2D LoadTextureFromImage(Image image);                                 -> ImageCopy()
TextureCubemap LoadTextureCubemap(Image image, int layoutType);              -> LoadImage()
RenderTexture2D LoadRenderTexture(int width, int height);                    -> LoadImage()
void UnloadTexture(Texture2D texture);                                       -> UnloadImage() 
void UnloadRenderTexture(RenderTexture2D target);                            -> UnloadImage()
void UpdateTexture(Texture2D texture, const void *pixels);                   -> ImageDraw()
void UpdateTextureRec(Texture2D texture, Rectangle rec, const void *pixels); -> ImageDraw()
Image GetTextureData(Texture2D texture);                                     -> ImageCopy()
Image GetScreenData(void);                                                   -> ImageCopy() (internal framebuffer)

void GenTextureMipmaps(Texture2D *texture);                                  -> ImageMipmaps()
void SetTextureFilter(Texture2D texture, int filterMode);                    -> Set ImageResize() scale mode
void SetTextureWrap(Texture2D texture, int wrapMode);                        -> Set Image wrap mode -> out-of-scope?

void DrawTexture(Texture2D texture, int posX, int posY, Color tint);         -> ImageDraw()
void DrawTextureV(Texture2D texture, Vector2 position, Color tint);          -> ImageDraw()
void DrawTextureEx(Texture2D texture, Vector2 position, float rotation, float scale, Color tint);       -> ImageDraw()
void DrawTextureRec(Texture2D texture, Rectangle sourceRec, Vector2 position, Color tint);              -> ImageDraw()
void DrawTextureQuad(Texture2D texture, Vector2 tiling, Vector2 offset, Rectangle quad, Color tint);    -> ImageDraw()
void DrawTexturePro(Texture2D texture, Rectangle sourceRec, Rectangle destRec, Vector2 origin, float rotation, Color tint);         -> ImageDraw()
void DrawTextureNPatch(Texture2D texture, NPatchInfo nPatchInfo, Rectangle destRec, Vector2 origin, float rotation, Color tint);    -> ImageDraw()

/// text.c
void DrawFPS(int posX, int posY);                                                   -> ImageDrawText()
void DrawText(const char *text, int posX, int posY, int fontSize, Color color);     -> ImageDrawText()
void DrawTextEx(Font font, const char *text, Vector2 position, float fontSize, float spacing, Color tint);                  -> ImageDrawTextEx()
void DrawTextRec(Font font, const char *text, Rectangle rec, float fontSize, float spacing, bool wordWrap, Color tint);     -> no
void DrawTextRecEx(Font font, const char *text, Rectangle rec, float fontSize, float spacing, bool wordWrap, Color tint,
                   int selectStart, int selectLength, Color selectTint, Color selectBackTint);                              -> no
void DrawTextCodepoint(Font font, int codepoint, Vector2 position, float scale, Color tint);                                -> ImageDrawTextEx()

/// shapes.c
void DrawPixel(int posX, int posY, Color color);                                    -> void ImageDrawPixel(Image *dst, int posX, int posY, Color color);
void DrawPixelV(Vector2 position, Color color);                                     -> void ImageDrawPixelV(Image *dst, Vector2 position, Color color);
void DrawLine(int startPosX, int startPosY, int endPosX, int endPosY, Color color); -> void ImageDrawLine(Image *dst, int startPosX, int startPosY, int endPosX, int endPosY, Color color);
void DrawLineV(Vector2 startPos, Vector2 endPos, Color color);                      -> void ImageDrawLineV(Image *dst, Vector2 start, Vector2 end, Color color);
void DrawCircle(int centerX, int centerY, float radius, Color color);               -> void ImageDrawCircle(Image *dst, int centerX, int centerY, int radius, Color color);
void DrawCircleV(Vector2 center, float radius, Color color);                        -> void ImageDrawCircleV(Image *dst, Vector2 center, int radius, Color color);
void DrawRectangle(int posX, int posY, int width, int height, Color color);         -> void ImageDrawRectangle(Image *dst, int posX, int posY, int width, int height, Color color);
void DrawRectangleV(Vector2 position, Vector2 size, Color color);                   -> void ImageDrawRectangleV(Image *dst, Vector2 position, Vector2 size, Color color);   
void DrawRectangleRec(Rectangle rec, Color color);                                  -> void ImageDrawRectangleRec(Image *dst, Rectangle rec, Color color);                  
void DrawRectangleLines(int posX, int posY, int width, int height, Color color);    -> void ImageDrawRectangleLines(Image *dst, Rectangle rec, int thick, Color color);
// Several shapes drawing functions not supported...

There is one big issue, and it is actually the main blocker for this implementation: how do we display our Image framebuffer on the screen?

Every platform would require a custom mechanism to push a bunch of pixels to the screen. GLFW has a long-standing open issue where some mechanisms were proposed: glfw/glfw#589

Two main concerns: how to integrate this into raylib (in a simple, non-intrusive way) and how useful this feature really is. Some answers:

  • Use the preprocessor flag USE_SOFTWARE_RENDERER to completely separate the default functions from the software-based ones (see the sketch after this list). For textures.c, text.c and shapes.c this seems feasible, but core.c would require careful tweaking and probably some function reorganization or internal preprocessing.
  • A software renderer could be useful when targeting old devices (consoles), some low-end embedded devices or custom devices with a display attached (Arduino, FPGA-based...). But in those cases raylib may be too high-level, and better lower-level options exist.
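A rough, hypothetical sketch of what that preprocessor split could look like inside shapes.c; the internal CPU framebuffer name (swFramebuffer) and the exact structure are assumptions, not existing raylib code:

```c
#include "raylib.h"

#if defined(USE_SOFTWARE_RENDERER)
// CPU-side back buffer, assumed to be created by InitWindow()
static Image swFramebuffer = { 0 };

// The public Draw*() API keeps its signatures but routes to the Image*() versions
void DrawPixel(int posX, int posY, Color color)
{
    ImageDrawPixel(&swFramebuffer, posX, posY, color);
}

void DrawRectangle(int posX, int posY, int width, int height, Color color)
{
    ImageDrawRectangle(&swFramebuffer, posX, posY, width, height, color);
}
#endif
```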

In any case, I opened the issue for reference and comments. Please do not send a PR; it is still a feature under consideration.

raysan5 self-assigned this Sep 8, 2020
mackron (Contributor) commented Sep 8, 2020

> Every platform would require a custom mechanism to push a bunch of pixels to the screen.

For X11 I use XShmPutImage() if the shared memory extension is available and fall back to XPutImage() if not: https://github.com/mackron/mintaro/blob/master/mintaro.h#L1232. Some of the examples in that GLFW issue you posted look less than ideal. One of them loops over each pixel and then calls XDrawPoint(), which seems crazy inefficient to me. If you're outputting directly to the window, I don't think you need to be creating a new XImage object each frame either.
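For reference, a minimal, untested sketch of the non-SHM X11 path along those lines: create one XImage up front that wraps the 32-bit pixel buffer, then push it with XPutImage() each frame (the pixel byte order must match the window's visual, typically BGRA on little-endian systems):

```c
#include <X11/Xlib.h>
#include <X11/Xutil.h>

static XImage *ximage = NULL;

// Wrap the raw pixel buffer once, at startup
void InitPresent(Display *dpy, int screen, char *pixels, int width, int height)
{
    ximage = XCreateImage(dpy, DefaultVisual(dpy, screen), DefaultDepth(dpy, screen),
                          ZPixmap, 0, pixels, width, height, 32, 0);
}

// Push the current contents of the pixel buffer to the window every frame
void PresentFrame(Display *dpy, Window win, GC gc, int width, int height)
{
    XPutImage(dpy, win, gc, ximage, 0, 0, 0, 0, width, height);
    XFlush(dpy);
}
```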

For Win32 I use StretchBlt(): https://github.com/mackron/mintaro/blob/master/mintaro.h#L1807.
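A related, untested Win32 sketch using StretchDIBits(), which pushes a raw 32-bit buffer straight to the window DC without creating an intermediate DIB section (GDI expects BGRA byte order):

```c
#include <windows.h>

void PresentFrame(HWND hwnd, const void *pixels, int width, int height)
{
    BITMAPINFO bmi = { 0 };
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;      // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC hdc = GetDC(hwnd);
    StretchDIBits(hdc, 0, 0, width, height,     // destination rectangle
                  0, 0, width, height,          // source rectangle
                  pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, hdc);
}
```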

Berni8k (Contributor) commented Sep 8, 2020

I'd say it is an advantage not having any way of rendering it to the screen.
This frees raylib of any platform-specific libraries and allows it to compile for any C platform: just point it at a framebuffer and away you go. Modern MCUs are powerful enough to run it these days. Another use for it is opening a PNG file from a console app, manipulating it, then saving it to PNG again, with no OpenGL instance or window needed (see the sketch below).
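A minimal sketch of that console-tool use case with today's API; the file names are just placeholders:

```c
#include "raylib.h"

int main(void)
{
    Image img = LoadImage("input.png");             // no window, no OpenGL context

    ImageColorInvert(&img);                         // CPU-only manipulation
    ImageResize(&img, img.width/2, img.height/2);

    ExportImage(img, "output.png");
    UnloadImage(img);
    return 0;
}
```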

If they want to display the image, they can bring their own method that fits the platform.

Repository owner deleted a comment from beaswag Sep 8, 2020
chrisdill (Contributor) commented
I think this is an interesting idea. I agree with compiling raylib without requiring the GPU, but I also feel some parts might be better made explicit instead. Here are a couple of examples:

  1. How does Texture2D store the Image? If LoadTexture uses LoadImage internally, then you need your own storage and id setup similar to OpenGL. Same for types like RenderTexture2D etc.

  2. I can't take an existing raylib program and enable software rendering without checking which drawing functions are supported. It may be better for users to create their own specific set of rendering functions that they can switch between instead.

mackron (Contributor) commented Sep 8, 2020

> I'd say it is an advantage not having any way of rendering it to the screen.

I could not agree with this any less. People aren't going to want to stuff around trying to figure out how to get their output onto the screen. There's no reason why you can't support both rendering to in-memory buffers and rendering to the screen, which is how it should be in my opinion, because it'll all use a common infrastructure under the hood anyway (I would think).

> If they want to display the image, they can bring their own method that fits the platform.

That's just a lazy implementation. raylib is a game development library, and people play games by looking at the screen. Almost everybody who uses raylib is going to want to display the output, and expecting them to just implement their own method to display the output is absolutely ridiculous.

raysan5 (Owner, Author) commented Sep 8, 2020

@mackron @Berni8k I think it depends on the kind of user this feature is intended for. Low-level embedded-device coders would probably feel more comfortable with just the pixels in memory, ready to be sent anywhere, while the common raylib user (students, hobbyists, people who want to put something on screen quickly...) would prefer to just enable a software renderer and expect the same result as GPU-based rendering... which makes me think about a real use case: why would a raylib user enable software rendering nowadays?

In any case, both options are possible; the only difference is a SwapBuffers() call. A specific flag could be created for that (e.g. AVOID_SCREEN_BUFFER_SWAP).

@chrisdill About your questions:

  1. Texture2D does not store the Image data, just the OpenGL id once it is uploaded to the GPU. Actually, I don't need to create an id system; just doing typedef Image Texture2D should be enough (see the sketch after this list).

  2. raylib already provides the ImageDraw*() function set for software rendering, which users can use by themselves; actually, that software-rendering functionality can already be used while skipping most parts of raylib.
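For illustration only, a hypothetical fragment of what that typedef approach could look like at the header level (this is not existing raylib code; with the alias in place the texture functions collapse onto their Image counterparts):

```c
#if defined(USE_SOFTWARE_RENDERER)
    // Texture2D becomes a plain CPU-side Image; no OpenGL id is needed
    typedef Image Texture2D;

    Texture2D LoadTexture(const char *fileName) { return LoadImage(fileName); }
    void UnloadTexture(Texture2D texture)       { UnloadImage(texture); }
#endif
```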

arydevy commented Sep 8, 2020

Wow, a good feature with the CPU textures.

Repository owner deleted a comment from arydevy Sep 8, 2020
MikeHart66 commented
Imho, if it still depends on OpenGL or a different API that GLFW/SDL/X11 uses, it isn't really software rendering.

chriscamacho (Contributor) commented Sep 9, 2020

@MikeHart66 as humble as your opinion is, it's wrong. You can use these image routines without initialising a window (only then does it create a GL context). All of the above (no GPU calls) can be used happily; there is no GLFW or X11 usage and it just works without the GPU. Thankfully raylib has nothing to do with SDL...

The no-GPU image functions are super useful for tools you call from a makefile (where you really don't need the GPU involved!), and you might need exactly these conversions.

Functions not requiring the GPU are clearly marked in raylib.h with a // comment above each group of functions.

Berni8k (Contributor) commented Sep 9, 2020

Re @raysan5: understandable. Though, what is the purpose of this no-GPU mode then? Every modern PC/Mac/Android/iOS platform supports OpenGL, so what would be the motivation for someone not to use it?

It does remove the dependency on OpenGL, but at the same time it introduces a dependency on some other platform-specific API to create a window and poll inputs. The only thing I can see this being useful for is getting raylib to run on an old Win 98 machine that does not have proper hardware-accelerated graphics support. Or perhaps raylib for MS-DOS with bitmap VGA graphics?

chriscamacho (Contributor) commented Sep 10, 2020 via email

Repository owner deleted a comment from arydevy Sep 18, 2020
raysan5 added the "new feature" label Nov 1, 2020
RobLoach (Contributor) commented
Would TinyGL be an option here? It would keep the same GL calls but render through a software-implemented subset of OpenGL. I haven't tested using GLFW with TinyGL, however. With SDL you can use SDL_SWSURFACE or SDL_CreateSoftwareRenderer(); GLFW doesn't give you a similar option, unfortunately.

raysan5 (Owner, Author) commented Sep 23, 2021

I'm closing this feature for the moment; supporting a software renderer does not seem feasible right now. Users requiring it can use the Image*() API and manage the buffer swap to the intended display manually (a minimal sketch of one way to do that follows below).
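For reference, a minimal sketch of that manual approach with stock raylib: render every frame with the Image*() API on the CPU, then upload the pixels to a single screen-sized texture that is used only for display:

```c
#include "raylib.h"

int main(void)
{
    InitWindow(320, 240, "software-rendered frame");

    Image frame = GenImageColor(320, 240, BLACK);       // CPU framebuffer
    Texture2D screen = LoadTextureFromImage(frame);     // GPU copy used only for display

    while (!WindowShouldClose())
    {
        // All drawing happens on the CPU, into the Image
        ImageClearBackground(&frame, RAYWHITE);
        ImageDrawCircle(&frame, GetMouseX(), GetMouseY(), 20, MAROON);

        UpdateTexture(screen, frame.data);              // the manual "swap buffers"

        BeginDrawing();
            DrawTexture(screen, 0, 0, WHITE);
        EndDrawing();
    }

    UnloadTexture(screen);
    UnloadImage(frame);
    CloseWindow();
    return 0;
}
```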

raysan5 closed this as completed Sep 23, 2021