UGUI Source Code Analysis 16: RawImage


/// <summary>
/// Displays a Texture2D for the UI System.
/// </summary>
/// <remarks>
/// If you don't have or don't wish to create an atlas, you can simply use this script to draw a texture.
/// Keep in mind though that this will create an extra draw call with each RawImage present, so it's
/// best to use it only for backgrounds or temporary visible graphics.
/// </remarks>
[RequireComponent(typeof(CanvasRenderer))]
[AddComponentMenu("UI/Raw Image", 12)]
public class RawImage : MaskableGraphic
{
    [FormerlySerializedAs("m_Tex")]
    [SerializeField] Texture m_Texture;
    [SerializeField] Rect m_UVRect = new Rect(0f, 0f, 1f, 1f);
    
    ...
}

The fastest way to understand code is to read its comments, and the UGUI source states it plainly:

RawImage displays a Texture2D texture in the UI system.

But how does it differ from Image? The comment answers that too: when you don't have, or don't want to create, an atlas, you can use this component to draw a texture directly. However, each RawImage adds an extra draw call, so it is generally used only for backgrounds or temporarily visible graphics.


The implementation of RawImage is relatively simple.

It needs a main texture to display. When no texture has been assigned on the component, it falls back to the material's main texture:

    /// <summary>
    /// Returns the texture used to draw this Graphic.
    /// </summary>
    public override Texture mainTexture
    {
        get
        {
            if (m_Texture == null)
            {
                if (material != null && material.mainTexture != null)
                {
                    return material.mainTexture;
                }
                return s_WhiteTexture;
            }

            return m_Texture;
        }
    }
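The fallback chain can be modeled in a few lines of Python (illustrative only; the real code is Unity C#, and `WHITE_TEXTURE` here is just a stand-in for `Graphic.s_WhiteTexture`):

```python
WHITE_TEXTURE = "s_WhiteTexture"  # stand-in for Graphic's shared white texture

def main_texture(m_texture, material_main_texture):
    """Return the texture RawImage would draw, mirroring the override above."""
    if m_texture is None:
        # Nothing assigned on the component: fall back to the material's
        # main texture, and finally to the shared white texture.
        if material_main_texture is not None:
            return material_main_texture
        return WHITE_TEXTURE
    return m_texture

print(main_texture(None, None))         # → s_WhiteTexture
print(main_texture(None, "matTex"))     # → matTex
print(main_texture("myTex", "matTex"))  # → myTex
```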

    /// <summary>
    /// UV rectangle used by the texture.
    /// </summary>
    public Rect uvRect
    {
        get
        {
            return m_UVRect;
        }
        set
        {
            if (m_UVRect == value)
                return;
            m_UVRect = value;
            SetVerticesDirty();
        }
    }

uvRect is commonly used to restrict what the RawImage displays: only the content inside the rect is shown.
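To make the rect's effect concrete, here is a small Python sketch (function name is mine) that maps a uvRect to the four UV corners in the same vertex order that OnPopulateMesh uses below (bottom-left, top-left, top-right, bottom-right):

```python
def uv_corners(uv_rect):
    """Expand a (xMin, yMin, width, height) uvRect into the four UV
    corners assigned to the quad's vertices."""
    x_min, y_min, w, h = uv_rect
    x_max, y_max = x_min + w, y_min + h
    return [(x_min, y_min), (x_min, y_max), (x_max, y_max), (x_max, y_min)]

# The default rect (0, 0, 1, 1) samples the whole texture:
print(uv_corners((0.0, 0.0, 1.0, 1.0)))
# A rect of (0, 0, 0.5, 0.5) samples only the bottom-left quarter:
print(uv_corners((0.0, 0.0, 0.5, 0.5)))
```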


    public override void SetNativeSize()
    {
        Texture tex = mainTexture;
        if (tex != null)
        {
            int w = Mathf.RoundToInt(tex.width * uvRect.width);
            int h = Mathf.RoundToInt(tex.height * uvRect.height);
            rectTransform.anchorMax = rectTransform.anchorMin;
            rectTransform.sizeDelta = new Vector2(w, h);
        }
    }

First, the required width and height of the RawImage are determined from the texture's dimensions and the uvRect range. Then anchorMax is set equal to anchorMin; with the anchors collapsed to a single point, setting sizeDelta directly sets the rectTransform's width and height.
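The arithmetic can be checked with a quick Python model (function name is mine; it mirrors the `Mathf.RoundToInt(tex.width * uvRect.width)` computation above):

```python
def native_size(tex_w, tex_h, uv_w, uv_h):
    """Native size is the texture size scaled by the uvRect extents,
    rounded to whole pixels."""
    return (round(tex_w * uv_w), round(tex_h * uv_h))

# A 512x256 texture with the default uvRect keeps its full size:
print(native_size(512, 256, 1.0, 1.0))  # → (512, 256)
# Showing only half the texture horizontally halves the native width:
print(native_size(512, 256, 0.5, 1.0))  # → (256, 256)
```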


    protected override void OnPopulateMesh(VertexHelper vh)
    {
        Texture tex = mainTexture;
        vh.Clear();
        if (tex != null)
        {
            var r = GetPixelAdjustedRect();
            var v = new Vector4(r.x, r.y, r.x + r.width, r.y + r.height);
            var scaleX = tex.width * tex.texelSize.x;
            var scaleY = tex.height * tex.texelSize.y;
            {
                var color32 = color;
                vh.AddVert(new Vector3(v.x, v.y), color32, new Vector2(m_UVRect.xMin * scaleX, m_UVRect.yMin * scaleY));
                vh.AddVert(new Vector3(v.x, v.w), color32, new Vector2(m_UVRect.xMin * scaleX, m_UVRect.yMax * scaleY));
                vh.AddVert(new Vector3(v.z, v.w), color32, new Vector2(m_UVRect.xMax * scaleX, m_UVRect.yMax * scaleY));
                vh.AddVert(new Vector3(v.z, v.y), color32, new Vector2(m_UVRect.xMax * scaleX, m_UVRect.yMin * scaleY));

                vh.AddTriangle(0, 1, 2);
                vh.AddTriangle(2, 3, 0);
            }
        }
    }

This overrides Graphic's OnPopulateMesh, which builds the mesh data later applied to the CanvasRenderer. It is essentially the same as Graphic's version, except for two extra lines:

            var scaleX = tex.width * tex.texelSize.x;
            var scaleY = tex.height * tex.texelSize.y;

I haven't figured out the meaning of this calculation; if you know, please leave a comment below.
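One plausible reading (my interpretation, not confirmed by the source): Unity's Texture.texelSize is (1/width, 1/height), so for an ordinary texture scaleX and scaleY come out as exactly 1 and the uvRect passes through unchanged; the multiplication would only matter if the reported width/height and texelSize ever disagreed, in which case it rescales the UV rect accordingly. A Python sketch of the computation:

```python
def uv_scale(tex_w, tex_h, texel_w, texel_h):
    """Mirror the scaleX/scaleY computation in OnPopulateMesh."""
    return (tex_w * texel_w, tex_h * texel_h)

# texelSize is (1/width, 1/height), so for an ordinary texture the
# scale factors are exactly 1 and the uvRect is used as-is:
print(uv_scale(512, 256, 1 / 512, 1 / 256))  # → (1.0, 1.0)
```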


Reposted from blog.csdn.net/NippyLi/article/details/123603172