ASCII Renderer

This text is a written version of the video above, with future plans to incorporate graphics and figures for illustration.
As I'm working my way through the backlog of videos that were published before this website, I suggest reading the article alongside the video for a comprehensive experience.

The old school aesthetic of rendering 3D as ASCII characters in the command line has always been appealing to me. I find it very clever that these renderers pick characters based on how many pixels they fill, using them to represent brightness rather than for their intended meaning.

Today we'll try to replicate this style and explore how it can be used as a way to render video games.

I’m Leonard and welcome to Useless Game Dev. Let’s get started.

Parse Texture

I’m going to be leveraging Unity’s renderer, letting it do all the heavy lifting of rendering the scene, so I can convert the resulting image to ASCII.

The first step is to grab the texture from the camera, and blit it to a render texture. Since a single character will represent a group of pixels, the render texture has a super low resolution, once again letting Unity do all the heavy lifting of pixelating the texture for us.

private void Update()
{
    RenderTexture.active = m_renderTexture; // make the low-res render texture the one ReadPixels reads from
    m_2DTexture.ReadPixels(new Rect(0, 0, m_renderTexture.width, m_renderTexture.height), 0, 0);
    string result = ParseTexture(m_2DTexture);

    m_text.text = result; // Send to TextMeshPro

    RenderTexture.active = null;
}
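
For completeness, here is roughly what the surrounding setup might look like. The field names match the snippet above, but the resolution values and the use of the camera’s targetTexture are assumptions on my part, not necessarily how the project wires things up.

[SerializeField] private Camera m_camera;
[SerializeField] private int m_columns = 160; // illustrative values: one pixel per future character
[SerializeField] private int m_rows = 60;

private RenderTexture m_renderTexture;
private Texture2D m_2DTexture;

private void Start()
{
    // A tiny render texture: rendering straight into it lets Unity handle the downscaling.
    m_renderTexture = new RenderTexture(m_columns, m_rows, 24);
    m_2DTexture = new Texture2D(m_columns, m_rows, TextureFormat.RGB24, false);

    m_camera.targetTexture = m_renderTexture;
}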

The real magic, then, is converting the texture into a string of characters.

The goal is to take the grayscale value of each pixel and pick a character whose filled pixels roughly match that brightness.

Obviously, the color black will be represented by a space.

For the color white, an at-sign seems to be the densest character available.

I played around with multiple ranges of characters but eventually settled on this one.

private char[] m_sortedCharacters = " .:-=+*#%@".ToCharArray();

private char GetCharacterFromGrayScale(float grayScale)
{
    // Clamp the index so a grayscale value of exactly 1.0 doesn't step past the end of the array.
    int index = Mathf.Min((int)(grayScale * m_sortedCharacters.Length), m_sortedCharacters.Length - 1);
    return m_sortedCharacters[index];
}

Converting the texture to a string is really straightforward: traverse the texture pixel by pixel, get the grayscale value of each pixel’s color, then pick the right character.

public string ParseTexture(Texture2D texture)
{
    string result = "";

    int height = texture.height;
    int width = texture.width;

    // Iterate through the texture
    for (int y = height - 1; y >= 0; y--)
    {
        for (int x = 0; x < width; x++)
        {
            var color = texture.GetPixel(x, y);
            float grayScale = color.grayscale;

            result += GetCharacterFromGrayScale(grayScale);
        }

        result += "\n"; // start a new line of text after each row of pixels
    }

    return result;
}

It’s already looking pretty good.
The first easy improvement is to remap the grayscale through a curve, which lets us adjust the contrast and tweak the final result.
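
As a sketch, that curve can simply be an AnimationCurve exposed in the inspector and evaluated before picking the character. The helper below is illustrative; the colored version of ParseTexture further down calls m_curve.Evaluate inline instead.

// The contrast curve, tweakable in the inspector (defaults to a straight 0 to 1 line).
[SerializeField] private AnimationCurve m_curve = AnimationCurve.Linear(0, 0, 1, 1);

private char GetRemappedCharacter(Color color)
{
    // Remap the raw grayscale through the curve before picking a character.
    return GetCharacterFromGrayScale(m_curve.Evaluate(color.grayscale));
}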

Adding Color

As a second improvement I want to add color. After all, most terminals now support color, so a modern ASCII render should offer that feature.

Interestingly, if we consider a color in the HSV space (Hue, Saturation, Value), we can see that the character we pick already takes care of the brightness, or value, component.

This means we want to colorize our character not with the pixel’s original color, but with a full-brightness version of that color.

This is how I’m doing it in code.

public string ParseTexture(Texture2D texture)
{
    string result = "";

    int height = texture.height;
    int width = texture.width;

    // Iterate through the texture
    for (int y = height - 1; y >= 0; y--)
    {
        for (int x = 0; x < width; x++)
        {
            var color = texture.GetPixel(x, y);
            float grayScale = m_curve.Evaluate(color.grayscale);

            if (m_useColor)
            {
                // Extract hue and saturation; the value (brightness) is discarded because the character already encodes it.
                Color.RGBToHSV(color, out float h, out float s, out float v);
                Color primaryColor = Color.HSVToRGB(h, s, 1);
                result += $"<color=#{ColorUtility.ToHtmlStringRGB(primaryColor)}>";
            }

            result += GetCharacterFromGrayScale(grayScale);

            if (m_useColor)
            {
                result += "</color>";
            }
        }

        result += "\n"; // start a new line of text after each row of pixels
    }

    return result;
}

And that’s what the colors would look like if we didn’t use ASCII characters to render the brightness. Notice how most shades are removed by the fact that every color is at full brightness.

Optimization

I don’t think we’re going to add any more improvements to this for now. However, it currently runs dreadfully slow. I’m talking 5 fps kind of slow.

A bit of profiling tells me the main pain points currently are the naive string concatenation, TextMeshPro rebuilding its mesh every frame, and the per-pixel color conversion.

Let’s replace the basic string manipulation with a StringBuilder; that’s what they’re for, and they’re bloody good at it.
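
A sketch of what that swap might look like inside ParseTexture; the structure mirrors the earlier snippet, only the accumulation changes (the rich-text color tags from the colored version would be appended the same way). It needs a using System.Text; directive for StringBuilder.

public string ParseTexture(Texture2D texture)
{
    // Pre-size the builder: one character per pixel, plus a newline per row.
    var builder = new StringBuilder(texture.width * texture.height + texture.height);

    for (int y = texture.height - 1; y >= 0; y--)
    {
        for (int x = 0; x < texture.width; x++)
        {
            float grayScale = m_curve.Evaluate(texture.GetPixel(x, y).grayscale);
            builder.Append(GetCharacterFromGrayScale(grayScale));
        }

        builder.Append('\n');
    }

    return builder.ToString();
}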

Next, TextMeshPro. TMP is great, it’s crisp, it has a lot of features, but it turns out that creating a mesh each frame for over 9000 characters is not optimal. Besides, I don’t need most of the features it offers, so let’s compare it to the legacy text options available in Unity.

The legacy UI text renders really blurry, and doesn’t support rich-text colors. That’s a hard pass.

The Immediate Mode GUI, or IMGUI, is lightning fast and does support rich-text colors, so we’re going with that.

                Looks   Speed     Rich-Text
TextMeshPro     Great   Awful     Yes
Legacy UI Text  Awful   Okay      No
Unity IMGUI     Okay    Blazin'   Yes
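
Drawing the string with IMGUI comes down to a short OnGUI call. This is a minimal sketch: m_asciiText and m_monospaceFont are placeholder names I’m assuming here; the important parts are the richText flag and a monospace font so the columns line up.

private GUIStyle m_style;

private void OnGUI()
{
    if (m_style == null)
    {
        // GUI.skin is only accessible from inside OnGUI, hence the lazy initialization.
        m_style = new GUIStyle(GUI.skin.label)
        {
            richText = true,          // needed for the <color> tags
            font = m_monospaceFont,   // assumed monospace font so every character has the same width
            fontSize = 8
        };
    }

    GUI.Label(new Rect(0, 0, Screen.width, Screen.height), m_asciiText, m_style);
}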

Lastly, moving pixel colors from RGB space to HSV and then computing their hexadecimal “HTML” notation is really costly. For this I have no real solution, because I need those conversions. So I default to what I do best: caching the resulting string for each color so it can be reused, trading CPU time for memory. This really helps the framerate, but it would become a scaling issue if I were making a proper game.
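
A sketch of what that cache could look like, assuming a dictionary keyed by the raw pixel color (the names are illustrative, and it needs using System.Collections.Generic;):

private readonly Dictionary<Color, string> m_colorTagCache = new Dictionary<Color, string>();

private string GetColorTag(Color color)
{
    if (!m_colorTagCache.TryGetValue(color, out string tag))
    {
        // Same conversion as before, but only performed the first time we meet this color.
        Color.RGBToHSV(color, out float h, out float s, out float v);
        Color primaryColor = Color.HSVToRGB(h, s, 1);
        tag = $"<color=#{ColorUtility.ToHtmlStringRGB(primaryColor)}>";
        m_colorTagCache.Add(color, tag);
    }

    return tag;
}

The cache stays small as long as the low-resolution image contains relatively few distinct colors, which is why it works here but wouldn’t scale to a full-color game.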

Moving to Render Feature

I’m back at a somewhat decent frame rate of approximately 20 fps. The next thing I want to do is move this to a Render Feature in Unity’s Universal Render Pipeline, so that I can drop it into any game project and see how it goes.
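
For reference, here is a very rough skeleton of such a feature, assuming URP 10–12 style APIs (newer versions replace cameraColorTarget with cameraColorTargetHandle). All names are illustrative; the pass simply copies the camera image into the low-resolution texture that the ASCII converter already reads.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class AsciiRenderFeature : ScriptableRendererFeature
{
    class AsciiPass : ScriptableRenderPass
    {
        public RenderTexture lowResTarget; // the low-resolution texture the ASCII converter reads from

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            CommandBuffer cmd = CommandBufferPool.Get("ASCII Downscale");

            // Copy the camera's color buffer into the low-res target; how you obtain
            // the color target differs between URP versions.
            Blit(cmd, renderingData.cameraData.renderer.cameraColorTarget, lowResTarget);

            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public RenderTexture m_lowResTarget;
    private AsciiPass m_pass;

    public override void Create()
    {
        m_pass = new AsciiPass
        {
            renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing,
            lowResTarget = m_lowResTarget
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_pass);
    }
}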

First, I’m looking at Dragon Crasher, a 2D Unity demo project. Comparing the color and ASCII versions side by side, the background interferes too much with the rest of the game, which could easily be fixed by removing it or partially fading it.

Because of the low resolution, you can somewhat make out the dragon, but the three characters in the middle are just a blurry mess.

I doubled the resolution of the image and tried again. It’s still not great but a lot more readable. And the dragon’s flames look kinda cool.

Let’s try another free Unity demo project, the 3D Game Kit, in third-person view. It’s still a mess because there’s waaay too much detail, but somehow, with the 3D camera moving constantly, the shapes appear clearer as the viewer gets a better perception of the depth of the scene.

At a higher resolution it’s a lot more readable, but at this point the characters start to act like individual pixels, which of course is going to look better regardless.

So, in conclusion, for this visual style to work, we need a high-contrast game with very few moving objects on screen, and/or a 3D camera that moves constantly, allowing the viewer to make out the perspective.

Have a good one!

Music Credits