LineRenderer2D: GPU pixel-perfect 2D line renderer for Unity URP (2D Renderer) [Repost]


Introduction
Vectorial solution
Bresenham solution
Line strips drawing
Optimizations
Code repository

[This was originally posted in the Unity forums]

Introduction

Unity provides developers with a great line rendering tool which basically generates a 3D mesh that faces the camera. This is enough for most games but, if you want to create 2D games based on pixel-art aesthetics, "perfect" lines do not fit with the rest of the sprites, especially if the size of the pixels in those sprites does not match the size of the pixels of the screen. You will need lines that fulfill one main rule: each pixel may have a neighbor either in the same column or in the same row, but not in both. Unity does not help in this case; you need to work on your own solution.

There are several alternatives. You can just draw the line into a sprite, which will look awful if you rotate it. You can use a texture and change it dynamically, drawing the line on the CPU side in C#, using the SetPixels method and the Bresenham algorithm, which can be slow and is limited by the size of the texture (although it allows resizing the sprite to achieve whatever line thickness you need). Or you can use a shader on the GPU with either vector algebra plus some "magic" or a modified version of the Bresenham algorithm, as I am going to explain here.
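
For context, here is a minimal sketch of that CPU-side approach, using the classic Bresenham algorithm on a readable, uncompressed texture (all names here are illustrative, not part of the project):

// Draws a 1-pixel-thick line into a readable texture on the CPU
void DrawLineCPU(Texture2D texture, Vector2Int a, Vector2Int b, Color color)
{
    int dx = Mathf.Abs(b.x - a.x);
    int sx = a.x < b.x ? 1 : -1;
    int dy = -Mathf.Abs(b.y - a.y);
    int sy = a.y < b.y ? 1 : -1;
    int error = dx + dy;
    int x = a.x;
    int y = a.y;

    while (true)
    {
        texture.SetPixel(x, y, color);

        if (x == b.x && y == b.y)
        {
            break;
        }

        int error2 = 2 * error;

        if (error2 >= dy)
        {
            error += dy;
            x += sx;
        }

        if (error2 <= dx)
        {
            error += dx;
            y += sy;
        }
    }

    texture.Apply(); // Uploads the modified pixels to the GPU
}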

Both shading methods have the following inputs in common:

  • Current screen pixel position.
  • The position of both line endpoints, in screen space.
  • The color of the line.
  • The line thickness.
  • The position of the origin (0, 0), in screen space (for screen adjustment purposes).

In Unity, we need just one sprite in the scene with whatever texture (it can be a 1-pixel-wide repeating texture), a material with a shader (made in Shadergraph, in this case) and a C# script that fills the parameters of the shader in the OnWillRenderObject event. Since we are using a sprite and Shadergraph with the 2D Renderer, it works with both the 2D sorting system and the 2D lighting system. In the C# script there must be something like this:

protected virtual void OnWillRenderObject()
{
    // Transform both endpoints to screen space and snap them to whole pixels
    Vector2 pointA = m_camera.WorldToScreenPoint(Points[0]);
    Vector2 pointB = m_camera.WorldToScreenPoint(Points[1]);
    pointA = new Vector2(Mathf.Round(pointA.x), Mathf.Round(pointA.y));
    pointB = new Vector2(Mathf.Round(pointB.x), Mathf.Round(pointB.y));

    // The world origin in screen space, used later to compensate for camera movement
    Vector2 origin = m_camera.WorldToScreenPoint(Vector2.zero);
    origin = new Vector2(Mathf.Round(origin.x), Mathf.Round(origin.y));

    m_Renderer.material.SetVector("_Origin", origin);
    m_Renderer.material.SetVector("_PointA", pointA);
    m_Renderer.material.SetVector("_PointB", pointB);
}
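
The color and the thickness can be sent the same way; the property names below depend on how the inputs were exposed in Shadergraph, so treat them as placeholders:

// "_Color" and "_Thickness" are placeholder property names for illustration
m_Renderer.material.SetColor("_Color", LineColor);
m_Renderer.material.SetFloat("_Thickness", Thickness);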

Vectorial solution

The vectorial solution is not perfect but it is the fastest. The main idea is to calculate the distance from a point on the screen to the line defined by two other points; if that distance is less than or equal to half the thickness of the line, the screen point is colored.
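
For reference, this is the standard 2D point-to-line distance behind that idea; given the endpoints $A$ and $B$ and a screen point $P$, the pixel is colored when

$$d = \frac{\left|(B_x - A_x)(P_y - A_y) - (B_y - A_y)(P_x - A_x)\right|}{\lVert B - A \rVert} \le \frac{\text{thickness}}{2}$$

(with the additional check that the projection of $P$ falls between $A$ and $B$, as the code below does).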

The main problem of this approach is that the screen is not composed of infinite points; it is a grid whose rows and columns depend on the resolution and the physical screen. If we want to draw a line whose thickness is 1 pixel, we cannot simply compare the distance from the point to the line against 0.5, because that would color any pixel crossed by the imaginary line, making some parts of the line look wider.

We need to find a way to compare distances that gives us the appropriate points to color. I have to be honest: I am not a mathematician and did not have enough time to analyze the values to find the best method to calculate the adjustment factor, so I only found some constants by trial and error, based upon an assumption: the slope of the line seems to be related to the distance to compare, and that distance is inversely proportional to how close the slope is to 45°. This relation is not exact, so erroneous results are unavoidable using this method. The constant values I discovered were:

fBaseTolerance (minimum distance in any case): 0.3686
fToleranceMultiplier (applied depending on the slope): 0.34935

#define M_PI 3.1415926535897932384626433832795

vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
vEndpointA = round(vEndpointA);
vEndpointB = round(vEndpointB);

// The tolerance gets bigger as the slope of the line gets closer to either of the 2 axes
float2 normalizedAbsNextToPrevious = normalize(abs(vEndpointA - vEndpointB));
float maxValue = max(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);
float minValue = min(normalizedAbsNextToPrevious.x, normalizedAbsNextToPrevious.y);
float inverseLerp = 1.0f - minValue / maxValue;

outDistanceCorrection = fBaseTolerance + fToleranceMultiplier * abs(inverseLerp);
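
For example, a perfectly diagonal line yields minValue = maxValue, so inverseLerp is 0 and the correction stays at fBaseTolerance (0.3686); a nearly axis-aligned line yields minValue close to 0, so inverseLerp approaches 1 and the correction grows to about 0.3686 + 0.34935 ≈ 0.718.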

Once we have the distance correction factor, we calculate whether the current screen point is close enough to the imaginary line. There are 2 corner cases, when the line is either completely horizontal or completely vertical; in those cases an offset is added just to avoid the round numbers that produce bad results (a bolder line).

// The amount of pixels the camera has moved regarding a thickness-wide block of pixels
vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
vOrigin = round(vOrigin);

// This moves the line N pixels; it is necessary because the camera moves 1 pixel each time while the line may be wider than 1 pixel,
// so this avoids the line jumping from one block (thickness-wide) to the next; instead its movement is smoother, pixel by pixel
vPointP += float2(fThickness, fThickness) - vOrigin;
vEndpointA += float2(fThickness, fThickness) - vOrigin;
vEndpointB += float2(fThickness, fThickness) - vOrigin;
vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
vEndpointA = round(vEndpointA);
vEndpointB = round(vEndpointB);
vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
vPointP = round(vPointP);
const float OFFSET = 0.055f;

// There are 2 corner cases: when the line is perfectly horizontal and when it is perfectly vertical
// Either case causes a glitch that makes the line fatter
if(vEndpointA.x == vEndpointB.x)
{
	vEndpointA.x -= OFFSET;
}

if(vEndpointA.y == vEndpointB.y)
{
	vEndpointA.y -= OFFSET;
}

float2 ab = vEndpointB - vEndpointA;
float dotSqrAB = dot(ab, ab);

float2 ap = vPointP - vEndpointA;
float dotAP_AB = dot(ap, ab);
float normProjectionLength = dotAP_AB / dotSqrAB;

float projectionLength = dotAP_AB / length(ab);
float2 projectedP = normalize(ab) * projectionLength;

bool isBetweenAandB = (normProjectionLength >= 0.0f && normProjectionLength <= 1.0f);
float distanceFromPToTheLine = length(ap - projectedP);

outIsPixelInLine = isBetweenAandB && distanceFromPToTheLine < fThickness * fDistanceCorrection;

In the block-snapping part of the source code (the fmod subtractions) you can see how every input point is adjusted to the bottom-left position of the block it belongs to. For example, if the line has a thickness of 4 pixels, the screen is divided by an imaginary grid whose cells occupy 4×4 pixels; a point at [7.2, 3.4] is moved to the position [4, 0]. In the accompanying image, dark squares represent the bottom-left corner of each 4×4 block and green squares are the pixels that are actually near the line and that are treated as if they were at each corner.

This subtract-modulo operation is what makes the line be drawn with the desired thickness. The round operation avoids a jittering effect produced by floating-point imprecision.

Since the camera can move 1 pixel at a time and the thickness of the line may be greater than 1 pixel, an undesired visual effect occurs: the line does not follow the camera pixel by pixel; it abruptly jumps to the next block of pixels once the camera displacement is greater than the thickness of the line. To fix this problem we have to subtract the displacement of the camera inside a block (from 0 to 3, if the thickness is 4 pixels) from the position of every evaluated point. In the source code, the first lines use an input point (vOrigin), the world-space position [0, 0] transformed to screen space, to calculate the amount of pixels the camera has moved both vertically and horizontally. The modulo of that position with respect to the thickness is calculated and then subtracted from the thickness, so we know the camera offset inside a block of pixels.
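
For example, with a thickness of 4 pixels and a camera whose transformed origin lands at x = 6, fmod(6, 4) = 2 and every point is shifted by 4 − 2 = 2 pixels; as the origin moves to x = 7 the shift becomes 1, so the line advances 1 pixel per 1 pixel of camera movement instead of snapping every 4.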

Here we can see the results of this algorithm, setting the thickness to 4 pixels:

Bresenham solution

This solution uses the Bresenham algorithm, so the result is perfect, but the calculation is more expensive than the vectorial solution. For each pixel occupied by the sprite rectangle, the algorithm walks the line from beginning to end; if the current point of the line coincides with the screen position being evaluated, the pixel uses the line color and the loop stops; otherwise the entire line is checked and the time is wasted (the background color is used instead).

The same adjustment is applied to the input points as in the vectorial solution (the vOrigin and fmod blocks at the top of the source code). The Bresenham implementations one can find out there use an increment of 1 to select the next pixel to be evaluated; in this version the increment equals the thickness of the line.

// The amount of pixels the camera has moved regarding a thickness-wide block of pixels
vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
vOrigin = round(vOrigin);

// This moves the line N pixels; it is necessary because the camera moves 1 pixel each time while the line may be wider than 1 pixel,
// so this avoids the line jumping from one block (thickness-wide) to the next; instead its movement is smoother, pixel by pixel
vPointP += float2(fThickness, fThickness) - vOrigin;
vEndpointA += float2(fThickness, fThickness) - vOrigin;
vEndpointB += float2(fThickness, fThickness) - vOrigin;
// This fixes every point to the bottom-left corner of the thickness-wide block it belongs to, so all pixels inside the block are considered the same
// If the block has to be colored, then all the pixels inside are colored
vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
vEndpointA = round(vEndpointA);
vEndpointB = round(vEndpointB);
vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
vPointP = round(vPointP);
// BRESENHAM ALGORITHM
// Modified to allow different thicknesses and to tell the shader whether the current pixels belongs to the line or not

int x = vEndpointA.x;
int y = vEndpointA.y;
int x2 = vEndpointB.x;
int y2 = vEndpointB.y;
int pX = vPointP.x;
int pY = vPointP.y;
int w = x2 - x;
int h = y2 - y;
int dx1 = 0, dy1 = 0, dx2 = 0, dy2 = 0;

if (w < 0)
{
    dx1 = -fThickness;
}
else if (w > 0)
{
    dx1 = fThickness;
}

if (h < 0)
{
    dy1 = -fThickness; 
}
else if (h > 0)
{
    dy1 = fThickness;
}

if (w < 0)
{
    dx2 = -fThickness;
}
else if (w > 0)
{
    dx2 = fThickness;
}

int longest = abs(w);
int shortest = abs(h);

if (longest <= shortest)
{
    longest = abs(h);
    shortest = abs(w);

    if (h < 0)
    {
        dy2 = -fThickness; 
    }
    else if (h > 0)
    {
        dy2 = fThickness;
    }
	
    dx2 = 0;
}

int numerator = longest >> 1;

outIsPixelInLine = false;

for (int i = 0; i <= longest; i += fThickness)
{
    if(x == pX && y == pY)
    {
        outIsPixelInLine = true;
        break;
    }

    numerator += shortest;

    if (numerator >= longest)
    {
        numerator -= longest;
        x += dx1;
        y += dy1;
    }
    else
    {
        x += dx2;
        y += dy2;
    }
}
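
Note that, for a screen pixel that does not belong to the line, the loop runs to the end, about longest / fThickness iterations; this per-pixel cost is what makes this approach more expensive than the vectorial one.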

Line strips drawing

If we want to draw multiple concatenated lines we could create multiple instances of the line renderer and bind their endpoints somehow, but there are cheaper ways to render line strips to represent, for example, a rope.

If we were using ordinary shaders we could send a vector array with all the points of the line to be processed but, unfortunately, Shadergraph does not allow arrays as input parameters for now. A workaround is sending a 1D texture, which is not supported either, so we will have to use a 2D texture whose height is 1 texel and whose width equals the amount of points. Every time the position of the points changes, the texture has to be updated. This is not the main texture of the sprite; we are talking about an additional texture. Regarding the format of the points texture, it is necessary to use a non-normalized one, for example TextureFormat.RGBAFloat (R32G32B32A32F); otherwise a loss of resolution occurs and the points jitter on the screen. We will also need to know the amount of points and the way the texture is to be sampled, so do not forget to pass in both parameters, the float and the sampler state.
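
As a reference, here is a minimal sketch of the CPU-side upload, storing one point per texel in the R and G channels (the packing optimization described later halves this); m_pointsTexture and the property names are illustrative:

// Builds the points texture and sends it to the material.
// One point per texel here (R = x, G = y); see the packing optimization below.
void UploadPoints(Vector2[] points)
{
    if (m_pointsTexture == null || m_pointsTexture.width != points.Length)
    {
        // A non-normalized float format is required, otherwise the points lose precision and jitter
        m_pointsTexture = new Texture2D(points.Length, 1, TextureFormat.RGBAFloat, false);
        m_pointsTexture.filterMode = FilterMode.Point;
    }

    for (int i = 0; i < points.Length; ++i)
    {
        m_pointsTexture.SetPixel(i, 0, new Color(points[i].x, points[i].y, 0.0f, 0.0f));
    }

    m_pointsTexture.Apply();

    m_Renderer.material.SetTexture("_PackedPoints", m_pointsTexture);
    m_Renderer.material.SetFloat("_PointsCount", points.Length);
}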

Once we have the data available in our shader, we have to iterate through the array, which means enclosing the Bresenham implementation explained previously in a for loop, sampling the points texture and picking an endpoint A and an endpoint B to calculate each line segment. When all the point pairs have been used, the loop ends. This way we use only one texture, one sprite and one material.

void IsPixelInLine_float(float fThickness, float2 vPointP, Texture2D tPackedPoints, float fPackedPointsCount, float fPointsCount, out bool outIsPixelInLine)
{
	// Origin in screen space
	float4 projectionSpaceOrigin = mul(UNITY_MATRIX_VP, float4(0.0f, 0.0f, 0.0f, 1.0f));
	float2 vOrigin = ComputeScreenPos(projectionSpaceOrigin).xy * _ScreenParams.xy;

	// The amount of pixels the camera has moved regarding a thickness-wide block of pixels
	vOrigin = fmod(vOrigin, float2(fThickness, fThickness));
	vOrigin = round(vOrigin);

	// This moves the line N pixels; it is necessary because the camera moves 1 pixel each time while the line may be wider than 1 pixel,
	// so this avoids the line jumping from one block (thickness-wide) to the next; instead its movement is smoother, pixel by pixel
	vPointP += float2(fThickness, fThickness) - vOrigin;

	vPointP = vPointP - fmod(vPointP, float2(fThickness, fThickness));
	vPointP = round(vPointP);

	int pointsCount = round(fPointsCount);

	outIsPixelInLine = false;
		
	for(int t = 0; t < pointsCount - 1; ++t)
	{
		int xCoord = floor(t / 2.0f);
		float4 packedPoints = tPackedPoints.Load(int3(xCoord, 0, 0));
		float4 packedPoints2 = tPackedPoints.Load(int3(xCoord + 1, 0, 0));

		float2 worldSpaceEndpointA = fmod(t, 2) == 0 ? packedPoints.rg : packedPoints.ba;
		float2 worldSpaceEndpointB = fmod(t, 2) == 0 ? packedPoints.ba : packedPoints2.rg;
		float4 projectionSpaceEndpointA = mul(UNITY_MATRIX_VP, float4(worldSpaceEndpointA.x, worldSpaceEndpointA.y, 0.0f, 1.0f));
		float4 projectionSpaceEndpointB = mul(UNITY_MATRIX_VP, float4(worldSpaceEndpointB.x, worldSpaceEndpointB.y, 0.0f, 1.0f));
		
		// Endpoints in screen space
		float2 vEndpointA = ComputeScreenPos(projectionSpaceEndpointA).xy * _ScreenParams.xy;
		float2 vEndpointB = ComputeScreenPos(projectionSpaceEndpointB).xy * _ScreenParams.xy;

		vEndpointA = round(vEndpointA);
		vEndpointB = round(vEndpointB);
	
		vEndpointA += float2(fThickness, fThickness) - vOrigin;
		vEndpointB += float2(fThickness, fThickness) - vOrigin;

		vEndpointA = vEndpointA - fmod(vEndpointA, float2(fThickness, fThickness));
		vEndpointB = vEndpointB - fmod(vEndpointB, float2(fThickness, fThickness));
		vEndpointA = round(vEndpointA);
		vEndpointB = round(vEndpointB);
		 
		int x = vEndpointA.x;
		int y = vEndpointA.y;
		int x2 = vEndpointB.x;
		int y2 = vEndpointB.y;
		int pX = vPointP.x;
		int pY = vPointP.y;
		int w = x2 - x;
		int h = y2 - y;
		int dx1 = 0, dy1 = 0, dx2 = 0, dy2 = 0;

		if (w<0) dx1 = -fThickness ; else if (w>0) dx1 = fThickness;
		if (h<0) dy1 = -fThickness ; else if (h>0) dy1 = fThickness;
		if (w<0) dx2 = -fThickness ; else if (w>0) dx2 = fThickness;

		int longest = abs(w);
		int shortest = abs(h);

		if (longest <= shortest)
		{
			longest = abs(h);
			shortest = abs(w);

			if (h < 0)
				dy2 = -fThickness; 
			else if (h > 0)
				dy2 = fThickness;
			
			dx2 = 0;
		}

		int numerator = longest >> 1;

		for (int i=0; i <= longest; i+=fThickness)
		{
			if(x == pX && y == pY)
			{
				outIsPixelInLine = true;
				break;
			}

			numerator += shortest;

			if (numerator >= longest)
			{
				numerator -= longest;
				x += dx1;
				y += dy1;
			}
			else
			{
				x += dx2;
				y += dy2;
			}
		}
	}
}

Note: In this version, some additional optimizations have been implemented, see next section.

Optimizations

Sprite size fitting

In order to avoid shading unnecessary pixels, the drawing area should be as small as possible. This area is defined by the sprite in the scene. If a 1×1 pixel texture is used (with its pivot at the top-left corner) then the width and height will match the scale and calculations are simpler. 

Every time the position of the points changes, the position and scale of the sprite change too. We only need to calculate the bounding box that contains the points of the line and expand it by as many pixels as the thickness of the line, so pixel blocks greater than 1 pixel are not cut off.
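
A minimal sketch of the fitting, assuming the 1×1-pixel sprite described above (pivot at the top-left corner) and a known pixels-per-unit value; the member names are illustrative:

// Fits the sprite to the bounding box of the points, expanded by the line
// thickness so thick pixel blocks at the borders are not cut off.
// m_pixelsPerUnit is an assumed field matching the camera / sprite setup.
void FitSpriteToPoints(Vector2[] points, float thicknessInPixels)
{
    Vector2 min = points[0];
    Vector2 max = points[0];

    for (int i = 1; i < points.Length; ++i)
    {
        min = Vector2.Min(min, points[i]);
        max = Vector2.Max(max, points[i]);
    }

    float margin = thicknessInPixels / m_pixelsPerUnit;
    min -= new Vector2(margin, margin);
    max += new Vector2(margin, margin);

    // With the pivot at the top-left corner, the position is the top-left of the
    // box and the scale matches its size
    transform.position = new Vector3(min.x, max.y, transform.position.z);
    transform.localScale = new Vector3(max.x - min.x, max.y - min.y, 1.0f);
}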

Points texture packing

The size of the 2D texture used for sending the point array to the GPU can be halved. Since we are working with 2D points, every texel (a Color, in C#) can store 2 points.
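
A possible sketch of the packing in C#, matching the layout the shader loop above unpacks with its rg/ba selection (point 2i in R and G, point 2i+1 in B and A); the method name is illustrative:

// Packs two 2D points per texel: point 2i goes to (r, g), point 2i+1 to (b, a)
Color[] PackPoints(Vector2[] points)
{
    int texelCount = (points.Length + 1) / 2; // Round up when the point count is odd
    Color[] texels = new Color[texelCount];

    for (int i = 0; i < points.Length; ++i)
    {
        Color texel = texels[i / 2];

        if (i % 2 == 0)
        {
            texel.r = points[i].x;
            texel.g = points[i].y;
        }
        else
        {
            texel.b = points[i].x;
            texel.a = points[i].y;
        }

        texels[i / 2] = texel;
    }

    return texels;
}

The resulting array can be uploaded with SetPixels and Apply, using a texture whose width is half the amount of points (rounded up).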

GPU-side point transformation

Instead of transforming the points of the line in the C# script, it is better to postpone that calculation to the GPU. Points can be passed in world space and then, in the shader, multiplied by the view and projection matrices and the screen size to obtain their screen position. The origin parameter (vOrigin) can be removed and calculated in the shader too.
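
With that change, a hedged sketch of the simplified per-frame logic (reusing the UploadPoints method sketched earlier) could be:

// No camera or WorldToScreenPoint calls are needed anymore; the points are
// uploaded in world space and transformed in the shader
protected virtual void OnWillRenderObject()
{
    if (m_pointsAreDirty) // Assumed flag, set whenever a point moves
    {
        UploadPoints(Points);
        m_pointsAreDirty = false;
    }
}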

Code repository

https://github.com/QThund/LineRenderer2D

You can add the project as a Github package using the Package Manager, entering the URL: https://github.com/QThund/LineRenderer2D.git?path=/Assets

Final note: Thanks to srslylawl for the bug fix.
