Unity Shaders and Effects Cookbook (4-6): Stunning Real-Time Reflections with a Dynamic Cubemap System

While out shopping yesterday I saw a metal Mickey Mouse on display in a Peacebird store, and my occupational instincts kicked in: I kept wondering what color metal actually is and how that reflection would be written as a shader, without getting anywhere. Today I happened to reach the section on dynamic reflection cubemap systems, and it left me unsatisfied, because the book describes baking cubemaps from pre-chosen sample points rather than generating them in real time. So I found the real-time reflection code in the official documentation, built a fairly showy scene, and was genuinely startled when I ran it: real-time reflection is that impressive.

Sections 4-1 and 4-2 of chapter four introduced creating a cubemap and then using it: Unity Shaders and Effects Cookbook (4-1)(4-2), Creating and Using Static Cubemaps.
As I said there, those are static cubemaps: each is captured at one location and is never re-captured as the object moves, so the reflection does not follow the surrounding environment.

This section is about real-time dynamic cubemaps. What the book's section 4.6 describes is a simple dynamic cubemap system: pick several points inside a room, capture at each point, and generate several cubemaps. Then, depending on the object's position, use the cubemap of whichever capture point it is closest to. In my sketch, 1, 2, 3 and 4 are capture points that produce Cubemap1, Cubemap2, Cubemap3 and Cubemap4, with the character in the middle; when the character walks near capture point 3, Cubemap3 is used. (Source: http://blog.csdn.net/huutu)

That is dynamic, but it is not real-time. So I went looking for real real-time reflection and implemented it from the Unity documentation: /ScriptReference/Camera.RenderToCubemap.html. The documentation sample is in JavaScript, but the code is so simple that this hardly matters; I translated it into C#. First, some screenshots of the result.

Compared with the static cubemap learned in section 4-1, the only change here is that the cubemap is now generated in real time. Recall that generating a cubemap uses the Camera API: camera.RenderToCubemap(cubemap). Calling this function every time the character moves keeps the cubemap current, which is all that "real-time" means here. Previously we created the cubemap as an asset in Assets, because creation and use happened at separate times; now that it is regenerated continuously there is no need for an asset. Create it in code, keep it in a variable, and refresh it in LateUpdate.

Set up a scene (I painted a small terrain to make things look nicer) and add a Sphere for the reflection. For its material, duplicate the one from section 4-1 and clear its Cubemap slot, since we now generate the cubemap at runtime. Then attach the following script to the Sphere:
using UnityEngine;
using System.Collections;

public class RealtimeReflection : MonoBehaviour
{
    Camera reflectionCamera;
    RenderTexture cubemap;

    // Use this for initialization
    void Start()
    {
        GameObject go = new GameObject("Reflection Camera", typeof(Camera));
        reflectionCamera = go.GetComponent<Camera>();
        go.hideFlags = HideFlags.HideAndDontSave;
        go.transform.position = transform.position;
        go.transform.rotation = Quaternion.identity;
        reflectionCamera.farClipPlane = 100;
        reflectionCamera.enabled = false;   // rendered manually, so keep it disabled

        cubemap = new RenderTexture(128, 128, 16);
        cubemap.isCubemap = true;           // legacy (pre-Unity 5) flag marking the RT as a cubemap
        cubemap.hideFlags = HideFlags.HideAndDontSave;

        // Feed the live cubemap to the reflection material.
        renderer.sharedMaterial.SetTexture("_Cubemap", cubemap);

        reflectionCamera.transform.position = transform.position;
        reflectionCamera.RenderToCubemap(cubemap, 63);
    }

    void LateUpdate()
    {
        // Re-capture all six faces from the sphere's current position every frame.
        reflectionCamera.transform.position = transform.position;
        reflectionCamera.RenderToCubemap(cubemap, 63);
    }
}
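The 63 passed to RenderToCubemap above is a face mask: one bit per cubemap face, in the order right, left, up, down, front, back. A small illustrative Python sketch (the helper name face_mask is made up for this example):

```python
# Sketch of how a RenderToCubemap faceMask is composed: bit i enables face i.
# Face order follows Unity's CubemapFace enum: right, left, up, down, front, back.
FACES = ["right", "left", "up", "down", "front", "back"]

def face_mask(*names):
    """Build a face mask from face names, e.g. face_mask('right', 'left') == 3."""
    mask = 0
    for name in names:
        mask |= 1 << FACES.index(name)
    return mask

print(face_mask(*FACES))   # 63 == 0b111111: render every face
print(face_mask("front"))  # 16: render only the forward face
```

Rendering fewer faces per frame is a common way to spread the cost of a real-time cubemap over several frames.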
The key part is the RenderToCubemap call in LateUpdate:

public bool RenderToCubemap(RenderTexture cubemap, int faceMask = 63);

faceMask selects which faces to render: 63 is 111111 in binary, with a 1 for each face that should be rendered. The order of the six bits is right, left, up, down, front, back; see the official documentation at /ScriptReference/Camera.RenderToCubemap.html and /ScriptReference/CubemapFace.html.

That is the main script. The movement-control script is as follows:
using UnityEngine;
using System.Collections;

public class MoveController : MonoBehaviour
{
    [SerializeField]
    Transform cameraTrans;

    void OnGUI()
    {
        // On-screen buttons for touch devices; 10 units per second in each direction.
        if (GUI.RepeatButton(new Rect(250, 50, 90, 90), "Forward"))
            transform.localPosition += new Vector3(0, 0, 10 * Time.deltaTime);
        if (GUI.RepeatButton(new Rect(250, 150, 90, 90), "Back"))
            transform.localPosition += new Vector3(0, 0, -10 * Time.deltaTime);
        if (GUI.RepeatButton(new Rect(150, 150, 90, 90), "Left"))
            transform.localPosition += new Vector3(-10 * Time.deltaTime, 0, 0);
        if (GUI.RepeatButton(new Rect(350, 150, 90, 90), "Right"))
            transform.localPosition += new Vector3(10 * Time.deltaTime, 0, 0);
        if (GUI.RepeatButton(new Rect(650, 50, 90, 90), "Up"))
            transform.localPosition += new Vector3(0, 10 * Time.deltaTime, 0);
        if (GUI.RepeatButton(new Rect(650, 150, 90, 90), "Down"))
            transform.localPosition += new Vector3(0, -10 * Time.deltaTime, 0);
    }

    void Update()
    {
        // WASD for horizontal movement, Q/E for down/up.
        if (Input.GetKey(KeyCode.W))
            transform.localPosition += new Vector3(0, 0, 10 * Time.deltaTime);
        if (Input.GetKey(KeyCode.A))
            transform.localPosition += new Vector3(-10 * Time.deltaTime, 0, 0);
        if (Input.GetKey(KeyCode.S))
            transform.localPosition += new Vector3(0, 0, -10 * Time.deltaTime);
        if (Input.GetKey(KeyCode.D))
            transform.localPosition += new Vector3(10 * Time.deltaTime, 0, 0);
        if (Input.GetKey(KeyCode.Q))
            transform.localPosition += new Vector3(0, -10 * Time.deltaTime, 0);
        if (Input.GetKey(KeyCode.E))
            transform.localPosition += new Vector3(0, 10 * Time.deltaTime, 0);
    }

    void LateUpdate()
    {
        // Keep the camera just behind and above the moving object.
        cameraTrans.localPosition = transform.localPosition + new Vector3(0f, 1.0f, -2f);
    }
}
I also tried it on a phone (a Meizu m1 metal): it is very laggy there, but the effect really is stunning. Screenshots from the phone are above. Project package download: /s/1dFqb7NV APK download: /s/1gf2iJcv
ARM Guide to Unity: Enhancing Your Mobile Games
5.1 Implementing reflections with a local cubemap in Unity
Graphics developers have always tried to find computationally cheap methods to implement reflections.
One of the first solutions was spherical mapping. This technique simulates reflections or lighting on objects without using expensive ray-tracing or lighting calculations.
The spherical surface is mapped into 2D:
Figure 5-1 Environment map on a sphere
Figure 5-2 Spherical surface 2D mapping
This approach has several disadvantages, but the main problem is the distortions
that occur when mapping a picture onto a sphere. In 1999, it became possible to use
cubemaps with hardware acceleration. Cubemaps solved the problems of image distortions,
viewpoint dependency and computational inefficiency related to spherical mapping.
Figure 5-3 Unfolded Cube
Cubemapping uses the six faces of a cube as the map shape. The environment is
projected onto each side of a cube and stored as six square textures, or unfolded into
six regions of a single texture. The cubemap is generated by rendering the scene from a given position with six different camera orientations, each with a 90 degree view frustum representing one cube face. Source images are sampled directly; no distortion is introduced by resampling into an intermediate environment map.
Figure 5-4 Infinite reflections
To implement reflections based on cubemaps, evaluate the reflected vector R and use it to fetch the texel from the cubemap _Cubemap using the available texture lookup function
texCUBE():
float4 color = texCUBE(_Cubemap, R);
The normal N and view vector D are passed to the fragment shader from the vertex shader.
The fragment shader fetches the texture color from the cubemap:
float3 R = reflect(D, N);
float4 color = texCUBE(_Cubemap, R);
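As a side note on what a cubemap lookup does with R: the hardware picks the face on the dominant axis of the direction vector and samples within it. A rough Python sketch of just the face selection (illustrative only, not Unity's or the GPU's actual implementation):

```python
def cubemap_face(direction):
    """Return which cubemap face a lookup direction samples:
    the face on the axis with the largest absolute component."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

print(cubemap_face((0.2, -0.9, 0.3)))  # -y: this reflection points mostly downward
```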
This approach can only reproduce reflections correctly from a distant environment, where the cubemap position is not relevant. This simple and effective technique is mainly used in outdoor lighting, for example to add reflections of the sky.
Figure 5-5 Incorrect reflections
If you use this technique in a local environment it produces inaccurate
reflections. The reason why the reflections are incorrect is that in the expression
float4 color = texCUBE(_Cubemap, R); there is no binding to the
local geometry. For example, if you walk across a reflective floor looking at it from
the same angle you always see the same reflection. The reflected vector is always the
same and the expression always produces the same result. This is because the direction
of the view vector does not change. In the real world reflections depend on both viewing
angle and viewing position.
Kevin Bjorke proposed a solution to this problem in 2004. This involves binding
to the local geometry in the procedure to calculate the reflection. See GPU Gems 2004, chapter 19.
Figure 5-6 Local correction using a bounding sphere
A bounding sphere is used as a proxy volume that delimits the scene to be reflected. Instead of using the reflected vector R to fetch the texture from the cubemap, a new vector R' is used. To build this new vector, find the intersection point P on the bounding sphere of the ray that starts at the local point V and travels in the direction of the reflected vector R. Then create a new vector R' from the center of the cubemap C, where the cubemap was generated, to the intersection point P, and use this vector to fetch the texture from the cubemap.
float3 R = reflect(D, N);
Find intersection point P
Find vector R’ = CP
float4 col = texCUBE(_Cubemap, R’);
This approach produces good results on the surfaces of objects with a near-spherical shape, but reflections in flat reflective surfaces are deformed. Another drawback of this method is that the algorithm for calculating the intersection point with the bounding sphere solves a second-degree equation, which is relatively expensive.
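The quadratic in question comes from intersecting the ray V + tR with the sphere |P - C| = r. A Python sketch of that intersection (the helper name ray_sphere_t is made up; the shader equivalent would run per fragment):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive t where origin + t*direction hits the sphere
    |p - center| = radius, or None if there is no forward hit.
    Expands to the quadratic (d.d)t^2 + 2(d.oc)t + (oc.oc - r^2) = 0."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    roots = [(-b - math.sqrt(disc)) / (2.0 * a), (-b + math.sqrt(disc)) / (2.0 * a)]
    hits = [t for t in roots if t > 0]
    return min(hits) if hits else None

# From the sphere centre along a unit direction, the exit is at t = radius:
print(ray_sphere_t((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 5.0))  # 5.0
```

The square root and branching here are what make the sphere proxy comparatively expensive.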
In 2010 a developer proposed a better solution in a forum post. This approach replaces the bounding sphere with a box, which solves both the deformation and the complexity problems of the previous method.
Figure 5-7 Local correction using a bounding box
More recent work from 2012 by Sebastien Lagarde uses this approach to simulate more complex ambient specular lighting, combining several cubemaps with an algorithm that evaluates the contribution of each cubemap and blends them efficiently on the GPU.
Table 5-1 Differences between infinite and local cubemaps
Infinite Cubemaps
Local Cubemaps
The following image shows the scene with correct reflections using local cubemaps.
Figure 5-8 Correct reflections
Shader Implementation
The following code shows the shader implementation in Unity of reflections
using local cubemaps.
The vertex shader calculates three magnitudes that are passed to the
fragment shader as interpolated values:
The vertex position.
The view direction.
The normal.
These values are in world coordinates.
vertexOutput vert(vertexInput input)
{
    vertexOutput output;
    output.tex = input.texcoord;
    // Transform vertex coordinates from local to world.
    float4 vertexWorld = mul(_Object2World, input.vertex);
    // Transform normal to world coordinates.
    float4 normalWorld = mul(float4(input.normal, 0.0), _World2Object);
    // Final vertex output position.
    output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
    // ----------- Local correction ------------
    output.vertexInWorld = vertexWorld.xyz;
    output.viewDirInWorld = vertexWorld.xyz - _WorldSpaceCameraPos;
    output.normalInWorld = normalWorld.xyz;
    return output;
}
The intersection point with the volume box and the reflected vector are computed in the fragment shader. You build a new, locally corrected reflection vector and use it to fetch the reflection texture from the local cubemap, then combine the texture and the reflection to produce the output color:
float4 frag(vertexOutput input) : COLOR
{
    float4 reflColor = float4(1, 1, 0, 0);
    // Find reflected vector in WS.
    float3 viewDirWS = normalize(input.viewDirInWorld);
    float3 normalWS = normalize(input.normalInWorld);
    float3 reflDirWS = reflect(viewDirWS, normalWS);
    // Working in World Coordinate System.
    float3 localPosWS = input.vertexInWorld;
    float3 intersectMaxPointPlanes = (_BBoxMax - localPosWS) / reflDirWS;
    float3 intersectMinPointPlanes = (_BBoxMin - localPosWS) / reflDirWS;
    // Looking only for intersections in the forward direction of the ray.
    float3 largestParams = max(intersectMaxPointPlanes, intersectMinPointPlanes);
    // Smallest value of the ray parameters gives us the intersection.
    float distToIntersect = min(min(largestParams.x, largestParams.y), largestParams.z);
    // Find the position of the intersection point.
    float3 intersectPositionWS = localPosWS + reflDirWS * distToIntersect;
    // Get local corrected reflection vector.
    float3 localCorrReflDirWS = intersectPositionWS - _EnviCubeMapPos;
    // Lookup the environment reflection texture with the right vector.
    reflColor = texCUBE(_Cube, localCorrReflDirWS);
    // Lookup the texture color.
    float4 texColor = tex2D(_MainTex, input.tex.xy);
    return _AmbientColor + texColor * _ReflAmount * reflColor;
}
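The per-axis max/min reduction in the fragment shader is the slab method, specialized to a ray that starts inside the box. A Python transcription of just that intersection (assuming, like the shader, that the position is inside the bounding box; the helper name is made up):

```python
import math

def dist_to_box_exit(pos, ray_dir, bbox_min, bbox_max):
    """Distance along ray_dir from pos (inside the box) to the box surface,
    mirroring the shader's max() then min() over the three slab pairs."""
    largest = []
    for p, d, lo, hi in zip(pos, ray_dir, bbox_min, bbox_max):
        if d == 0.0:
            # Ray parallel to this slab pair; the GPU gets +inf from the division.
            largest.append(math.inf)
            continue
        # max() keeps the intersection in the forward direction of the ray.
        largest.append(max((hi - p) / d, (lo - p) / d))
    # The nearest of the forward slab hits is where the ray leaves the box.
    return min(largest)

# A fragment at (1, 0, 2) in a 10x10x10 box, reflecting straight up:
print(dist_to_box_exit((1.0, 0.0, 2.0), (0.0, 1.0, 0.0),
                       (-5.0, -5.0, -5.0), (5.0, 5.0, 5.0)))  # 5.0
```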
In the fragment shader code, the magnitudes _BBoxMax and _BBoxMin are the maximum and minimum points of the bounding volume, and _EnviCubeMapPos is the position where the cubemap was created. Pass these values to the shader from the following script:
[ExecuteInEditMode]
public class InfoToReflMaterial : MonoBehaviour
{
    // The proxy volume used for local reflection calculations.
    public GameObject boundingBox;

    void Start()
    {
        Vector3 bboxLength = boundingBox.transform.localScale;
        Vector3 centerBBox = boundingBox.transform.position;
        // Min and max BBox points in world coordinates.
        Vector3 BMin = centerBBox - bboxLength / 2;
        Vector3 BMax = centerBBox + bboxLength / 2;
        // Pass the values to the material.
        gameObject.renderer.sharedMaterial.SetVector("_BBoxMin", BMin);
        gameObject.renderer.sharedMaterial.SetVector("_BBoxMax", BMax);
        gameObject.renderer.sharedMaterial.SetVector("_EnviCubeMapPos", centerBBox);
    }
}
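The min/max corner computation in the script is just the centre plus or minus half of localScale; in Python terms (hypothetical helper name):

```python
def bbox_min_max(center, scale):
    """World-space min and max corners of the axis-aligned proxy volume,
    computed from its centre and localScale as the script does."""
    half = [s / 2.0 for s in scale]
    bmin = tuple(c - h for c, h in zip(center, half))
    bmax = tuple(c + h for c, h in zip(center, half))
    return bmin, bmax

print(bbox_min_max((1.0, 2.0, 3.0), (10.0, 4.0, 6.0)))
# ((-4.0, 0.0, 0.0), (6.0, 4.0, 6.0))
```

Note this assumes an unrotated bounding box: the shader's slab test is axis-aligned, so the proxy volume should not be rotated.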
Pass the values for _AmbientColor, _ReflAmount, the main texture, and cubemap texture to the
shader as uniforms from the properties block:
Shader "Custom/ctReflLocalCubemap"
{
    Properties
    {
        _MainTex ("Base (RGB)", 2D) = "white" { }
        _Cube ("Reflection Map", Cube) = "" {}
        _AmbientColor ("Ambient Color", Color) = (1, 1, 1, 1)
        _ReflAmount ("Reflection Amount", Float) = 0.5
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma glsl
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // User-specified uniforms
            uniform sampler2D _MainTex;
            uniform samplerCUBE _Cube;
            uniform float4 _AmbientColor;
            uniform float _ReflAmount;
            uniform float _ToggleLocalCorrection;
            // ---- Passed from the script InfoToReflMaterial.cs --------
            uniform float3 _BBoxMin;
            uniform float3 _BBoxMax;
            uniform float3 _EnviCubeMapPos;

            struct vertexInput
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 texcoord : TEXCOORD0;
            };

            struct vertexOutput
            {
                float4 pos : SV_POSITION;
                float4 tex : TEXCOORD0;
                float3 vertexInWorld : TEXCOORD1;
                float3 viewDirInWorld : TEXCOORD2;
                float3 normalInWorld : TEXCOORD3;
            };

            /* Vertex shader (vert) as shown above */
            /* Fragment shader (frag) as shown above */
            ENDCG
        }
    }
}
The algorithm that calculates the intersection point with the bounding volume is based on the parametric representation of the reflected ray from the local position, that is, the fragment being shaded.
Filtering Cubemaps
One of the advantages of implementing reflections using local cubemaps is the
fact that the cubemap is static. That is, it is generated during development rather than
at run-time. This provides an opportunity to apply any filtering to the cubemap images
to achieve an effect.
CubeMapGen
CubeMapGen is a tool by AMD that applies filtering to cubemaps. You can obtain CubeMapGen from the AMD developer web site.
To export cubemap images from Unity to CubeMapGen you must save each cubemap image separately. The source code for a tool that saves the images is available; the tool can create a cubemap and can optionally save each cubemap image separately.
You must place the script for this tool in a folder called Editor in the Assets directory.
To use the cubemap editor tool:
1. Create the cubemap.
2. Launch the Bake Cubemap tool from the GameObject menu.
3. Provide the cubemap and the camera render position.
4. Optionally save individual images if you plan to apply filtering to the cubemap.
Figure 5-9 Cubemap Bake tool interface
You can load each of the cubemap images separately into CubeMapGen. Select the face to load from the Select Cube Face drop-down menu and press the Load Cubemap Face button. When all faces have been loaded you can rotate the cubemap and check that it is correct.
CubeMapGen has a number of different filtering options in the Filter Type drop-down menu. Select the filter settings you require and press Filter Cubemap to apply the filter. Filtering can take up to several minutes depending on the size of the cubemap. There is no undo option, so save the cubemap as a single image before applying any filtering; if the result of the filtering is not what you expect, you can reload the cubemap and try adjusting the parameters.
Use the following procedure to import cubemap images into CubeMapGen:
1. Check the box to save individual images when baking the cubemap.
2. Launch the CubeMapGen tool and load the cubemap images following the relations shown in the following table.
3. Save the cubemap as a single dds or cube cross image. Undo is not available, so this enables you to reload the cubemap if you are experimenting with filters.
4. Apply filters to the cubemap as required until the results are satisfactory.
5. Save the cubemap as individual images.
The following table shows the equivalence of cubemap face index between
CubeMapGen and Unity.
Table 5-2 Equivalence of cubemap face index between CubeMapGen and Unity
The following images show CubeMapGen after loading the six cubemap images and after applying Gaussian filtering to achieve a frosty effect.
Figure 5-10 CubeMapGen
Figure 5-11 CubeMapGen showing frosty effect
The following table shows the filter parameters used with the Gaussian filter
to achieve the frosty effect.
Table 5-3 Parameters used in CubeMapGen to produce a frosty effect in the
reflections.
Figure 5-12 Reflection with frosty effect
The following images summarize the work flow to apply filtering to Unity
cubemaps with the CubeMapGen tool.
Figure 5-13 Cubemap filtering workflow

Common Unity Shader Keywords and Properties
(Editor: Wang Liang)
I have read through shader material many times, yet I still feel at a loss when it comes to actually writing a shader and checking its effect. I think it is because my shader fundamentals are not solid enough, so while re-reading the shader books I decided to write down the commonly used keywords as I came across them. I hope this is of some use to others learning Unity shaders.
Shader
This is the root command of a shader file.
SubShader
A sub-shader; one Shader can contain multiple SubShaders.
Properties
name("display name", Range(min, max)) = number
Defines a ranged value.
name("display name", Color) = (number, number, number, number)
Defines a color; each number runs from 0 to 1.
name("display name", 2D) = "name" {option}
Defines a 2D texture; default values are "white", "black", "gray" and "bump".
name("display name", Rect) = "name" {option}
Defines a rectangle texture (non-power-of-two); defaults are the same as for 2D textures.
name("display name", Cube) = "name" {option}
Defines a cube texture; defaults are the same as for 2D textures.
name("display name", Float) = number
Defines a float value.
name("display name", Vector) = (number, number, number, number)
Defines a four-component vector.
Texture property options (the {option} block of 2D and similar properties)
TexGen
The mode used when texture coordinates are generated automatically; one of ObjectLinear, EyeLinear, SphereMap, CubeMap, CubeNormal.
LightmapMode
Lightmap mode. If this option is given, the texture is affected by the renderer's lightmap: the texture cannot be assigned in the material, and the renderer's settings are used instead.
Queue tag
Tags {"Queue" = "Transparent"}
Background
Rendered before every other queue; used for skyboxes and similar objects (1000).
Geometry
The default queue, used for most objects; opaque geometry uses this queue (2000).
Transparent
Rendered after the geometry queue, in back-to-front order. Any alpha-blended object (a shader that does not write to the depth buffer) should render here, such as glass or particles (3000).
Overlay
Used for overlay effects; anything that must be rendered last goes here, such as lens flares (4000).
IgnoreProjector tag
If this tag is given, the object is not affected by Projectors.
Pass
Each pass causes the geometry to be rendered once; one SubShader can contain multiple Passes.
There are two special pass types:
UsePass "Shader/Name"
Inserts all passes with the given name from the given shader; Shader is the shader name and Name is the pass name.
GrabPass {["TextureName"]}
Grabs the screen into a texture, which is typically used in a later pass; the texture name is optional.
UsePass example:
UsePass "Specular/BASE"
Name "MyPassName"
This uses the Specular/BASE pass and makes it available under the name MyPassName in the current shader.
All pass names are stored starting with an upper-case letter, so UsePass must reference names written that way.
GrabPass captures the current screen contents into a texture that later passes can access as _GrabTexture.
Pass render-state commands
Material {material block}
Defines a material that uses the vertex-lighting pipeline.
Lighting On | Off
Turns vertex lighting on or off.
Cull mode
Sets the culling mode: Back, Front or Off.
ZTest mode
Sets the depth-test mode: Less, Greater, LEqual, GEqual, Equal, NotEqual or Always.
ZWrite On | Off
Turns depth-buffer writing on or off.
Fog {fog block}
Sets the fog parameters.
AlphaTest mode value
Turns on the alpha test; the modes are Less, Greater, LEqual, GEqual, Equal, NotEqual and Always.
Blend mode
Sets the alpha blending mode as SourceBlendMode DestBlendMode.
Color color
Sets the color used when vertex lighting is turned off.
ColorMask value
Sets the color write mask. The value may be RGB, A, 0, or any combination of R, G, B and A; 0 turns off rendering to all color channels.
Offset factor, units
Sets the depth offset; this command only accepts constant arguments.
SeparateSpecular On | Off
Turns the separate specular highlight color for vertex lighting on or off.
ColorMaterial colorSet
Uses the per-vertex color when computing vertex lighting; the color set is AmbientAndDiffuse or Emission.
SetTexture
Sets a texture, in the form SetTexture [_TexturePropertyName] {texture block}.
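ZTest and AlphaTest share the same set of comparison modes. A quick illustrative Python sketch of how a fragment passes or fails "AlphaTest mode value" (the helper names are made up for the example):

```python
import operator

# The comparison modes shared by ZTest and AlphaTest, as plain predicates.
COMPARE = {
    "Less": operator.lt, "Greater": operator.gt,
    "LEqual": operator.le, "GEqual": operator.ge,
    "Equal": operator.eq, "NotEqual": operator.ne,
    "Always": lambda a, b: True,
}

def alpha_test(mode, alpha, cutoff):
    """True if a fragment with this alpha survives 'AlphaTest mode cutoff'."""
    return COMPARE[mode](alpha, cutoff)

print(alpha_test("Greater", 0.8, 0.5))  # True: the fragment is kept
print(alpha_test("Greater", 0.3, 0.5))  # False: the fragment is discarded
```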
Blending is used to render transparent objects.
Blending is normally done in RGBA mode, and the draw order of objects affects how OpenGL blends them.
The common Blend syntax is:
Blend SrcFactor DstFactor
Configures and enables blending. The generated color is multiplied by the source factor, the color already in the color buffer is multiplied by the destination factor, and the two results are added together.
Blend SrcFactor DstFactor, SrcFactorA DstFactorA
The source and destination factors are as above, but SrcFactorA and DstFactorA are used for blending the alpha channel.
BlendOp operation
Instead of adding the blended colors together, performs an operation on them. The main operations are Min (minimum), Max (maximum), Sub (source minus destination) and RevSub (destination minus source).
About blend factors
The OpenGL Red Book explains them as follows:
source factor: the color value produced by the current texture or lighting computation
destination factor: the image already in the framebuffer
The common blend factors (valid for both source and destination) are:
One
The value 1; use this to let the source or destination color pass through in full.
Zero
The value 0; use this to remove the source or destination contribution.
SrcColor
The stage value is multiplied by the source color.
SrcAlpha
The stage value is multiplied by the source alpha.
DstColor
The stage value is multiplied by the framebuffer destination color.
DstAlpha
The stage value is multiplied by the framebuffer destination alpha.
OneMinusSrcColor
The stage value is multiplied by (1 - source color).
OneMinusSrcAlpha
The stage value is multiplied by (1 - source alpha).
OneMinusDstColor
The stage value is multiplied by (1 - destination color).
OneMinusDstAlpha
The stage value is multiplied by (1 - destination alpha).
The common blend types are:
Blend SrcAlpha OneMinusSrcAlpha // alpha blending
Blend One One // additive
Blend One OneMinusDstColor // soft additive
Blend DstColor Zero // multiplicative
Blend DstColor SrcColor // 2x multiplicative
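All of these Blend lines reduce to result = source * srcFactor + destination * dstFactor per channel (with the addition replaced by the BlendOp if one is set). An illustrative Python evaluation of the most common line, Blend SrcAlpha OneMinusSrcAlpha:

```python
def alpha_blend(src_rgba, dst_rgb):
    """Blend SrcAlpha OneMinusSrcAlpha, applied per RGB channel:
    src * srcAlpha + dst * (1 - srcAlpha)."""
    a = src_rgba[3]
    return tuple(s * a + d * (1.0 - a) for s, d in zip(src_rgba[:3], dst_rgb))

# A half-transparent red fragment drawn over a white framebuffer pixel:
print(alpha_blend((1.0, 0.0, 0.0, 0.5), (1.0, 1.0, 1.0)))  # (1.0, 0.5, 0.5)
```

This is also why draw order matters for transparency: the destination term is whatever already happens to be in the framebuffer.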
Pass tags can only be used in passes, not in sub-shaders.
LightMode tag: defines the pass's role in the lighting pipeline. It is rarely used directly, because most shaders that need to interact with lighting are written as surface shaders.
The common LightMode values are:
Always
Always rendered; no lighting is applied.
ForwardBase
Ambient light, the main directional light, and vertex/SH lights are applied.
ForwardAdd
Additive per-pixel lights are applied, one pass per light.
PrepassBase
Renders normals and the specular exponent.
PrepassFinal
Combines textures, lighting and emission into the final color.
Vertex
Used when the object is not lightmapped and is rendered with vertex lighting; all vertex lights are applied.
VertexLMRGBM
Used when the object is lightmapped and rendered with vertex lighting, on platforms where the lightmap is RGBM-encoded.
VertexLM
Used when the object is lightmapped and rendered with vertex lighting, on platforms where the lightmap is double-LDR-encoded (mobile platforms and older GPUs).
ShadowCaster
Renders the object as a shadow caster.
ShadowCollector
Collects the object's shadows into a screen-space buffer for the forward rendering path.
RequireOptions
The pass is rendered only when certain external conditions are met. Its value is SoftVegetation: the pass renders only when Soft Vegetation is enabled in Quality Settings.
Fallback
If no sub-shader can run, the fallback shader runs instead. Usage:
Fallback "ShaderName" // fall back to the named shader
Fallback Off // no fallback; no error is reported even when no sub-shader can run
Category
A category is a logical group of rendering commands, used in most cases to inherit rendering state.
Surface shaders
#pragma surface surfaceFunction lightModel [optionalparams]
Declares which CG function contains the surface shader code. The function has the form:
void surf(Input IN, inout SurfaceOutput o)
The built-in lighting models used for lightModel are Lambert (diffuse) and BlinnPhong (specular).
The optional parameters are:
alpha
Alpha blending mode; use it to write shaders that render semi-transparent effects.
alphatest:VariableName
Alpha testing mode; use it to write cutout shaders. The cutoff variable (VariableName) is a float.
vertex:VertexFunction
Custom vertex function; VertexFunction is the function name.
finalcolor:ColorFunction
Custom final color function; ColorFunction is the function name.
exclude_path:prepass or exclude_path:forward
Use the specified rendering path only, so passes do not need to be generated for the other one.
addshadow
Adds shadow caster and collector passes; typically used with custom vertex modification so shadows also follow procedural vertex animation.
dualforward
Uses dual lightmaps in the forward rendering path.
fullforwardshadows
Supports all shadow types in the forward rendering path.
decal:add
Additive decal shader.
decal:blend
Semi-transparent blended decal shader.
softvegetation
Makes the surface shader render only when Soft Vegetation is turned on.
noambient
Does not apply any ambient lighting or spherical harmonics lights.
novertexlights
Does not apply spherical harmonics lights or per-vertex lights in forward rendering.
nolightmap
Disables lightmaps for this shader.
nodirlightmap
Disables directional lightmaps for this shader.
noforwardadd
Disables the forward-rendering additive pass. The shader then supports one full directional light, with all other lights computed per-vertex/SH.
approxview
Where the shader needs it, computes the normalized view direction per vertex instead of per pixel. This is faster, but the view direction is not entirely correct when the camera gets close to the surface.
halfasview
Passes the half-direction vector into the lighting function instead of the view direction. The half-direction is computed and normalized per vertex, which is faster but not entirely correct.
The standard surface shader output structure is:
struct SurfaceOutput {
    half3 Albedo;   // diffuse reflection color
    half3 Normal;   // normal
    half3 Emission; // emission, used to raise the object's own brightness so it appears to emit light
    half Specular;  // specular highlight
    half Gloss;     // glossiness
    half Alpha;     // transparency
};
The surface shader input structure, Input, contains all the texture coordinates and additional values the shader needs.
Texture coordinate members must be the texture name prefixed with "uv", or with "uv2" to use the second texture coordinate set.
Additional values are added to Input as needed; the common ones are:
float3 viewDir
View direction; include it to compute parallax, rim lighting and similar effects.
float4 with COLOR semantic
The interpolated per-vertex color.
float4 screenPos
Screen-space position; include it for screen-space effects such as reflections.
float3 worldPos
World-space position.
float3 worldRefl
World-space reflection vector; provided when the surface shader does not write to o.Normal.
float3 worldNormal
World-space normal vector; provided when the surface shader does not write to o.Normal.
float3 worldRefl; INTERNAL_DATA
When the surface shader writes to o.Normal, this contains the world reflection vector. To get the reflection vector based on the per-pixel normal map, use WorldReflectionVector(IN, o.Normal).
float3 worldNormal; INTERNAL_DATA
When the surface shader writes to o.Normal, this contains the world normal vector. To get the normal vector based on the per-pixel normal map, use WorldNormalVector(IN, o.Normal).
RenderType
RenderType can be set in a sub-shader's Tags, for example:
Tags {"RenderType" = "Opaque"}
The RenderType values are:
Opaque: used by most shaders (normal, self-illuminated, reflective, and terrain shaders).
Transparent: used by semi-transparent shaders (transparent, particle, font, and terrain additional-pass shaders).
TransparentCutout: masked transparency shaders (Transparent Cutout, two-pass vegetation shaders).
Background: skybox shaders.
Overlay: GUITexture, halo and flare shaders.
TreeOpaque: terrain engine tree bark.
TreeTransparentCutout: terrain engine tree leaves.
TreeBillboard: terrain engine billboarded trees.
Grass: terrain engine grass.
GrassBillboard: terrain engine billboarded grass.
Vertex and fragment shaders
Vertex and fragment shaders do not interact with lights. The actual shader code is written in CG or HLSL and embedded in the shader's pass block. The common compile directives are:
#pragma vertex name
Declares the function called name as the vertex program.
#pragma fragment name
Declares the function called name as the fragment program.
#pragma fragmentoption option
Adds an option to the compiled OpenGL fragment program. The list of allowed options can be found in OpenGL's ARB_fragment_program specification. This directive has no effect on vertex programs or on programs not compiled for OpenGL.
#pragma target name
Sets the shader compilation target.
#pragma only_renderers space-separated-names
Compiles the shader only for the given renderers; by default shaders are compiled for all renderers.
#pragma exclude_renderers space-separated-names
Does not compile the shader for the given renderers; by default shaders are compiled for all renderers.
#pragma glsl
When compiling for desktop OpenGL platforms, converts Cg/HLSL into GLSL instead of the default ARB vertex/fragment programs.
The renderer names used with only_renderers and exclude_renderers include opengl (desktop OpenGL), d3d9 (Direct3D 9), gles (OpenGL ES 2.0) and ps3 (PlayStation 3).
Usage: #pragma only_renderers opengl
Note: every vertex/fragment shader must contain a vertex program or a fragment program (or both), so it needs a #pragma vertex command or a #pragma fragment command (or both).
Vertex structure members
float4 vertex
Vertex position.
float3 normal
Vertex normal.
float4 texcoord
First UV coordinate set.
float4 texcoord1
Second UV coordinate set.
float4 tangent
Tangent vector (used for normal mapping).
float4 color
Per-vertex color.