New V-Ray for Rhino Manual
Global Illumination (GI) exists so that users don't have to spend a lot of time adjusting light location and brightness. The concept of GI is very simple. Imagine a room that has a window but no light in it. The natural light from outside comes in through the window, so the room doesn't look completely dark even though there is no light inside it.
Some people even call this "lazy boy lighting." Its purpose is to allow users to have the most natural light possible without spending too much time to achieve it.
But this is still thousands of times lower than what sunlight can produce. With the HDR file format, users have more control over the range from dark to bright. HDR is a very special image file format. It usually starts with professional-grade photography, which is then transformed into a 96-bit full scene image using professional HDR software.
The benefit of using HDR is that you can use this full scene image as your render light source. It can also be used as the background of the rendering. However, the HDR image format is still limited when describing the lighting environment. Like other lighting environments simulated with regular image file formats, it is usually used only as supporting lighting for the entire scene. That means adjusting the settings of major light sources is still very important work in V-Ray for SketchUp.
We will discuss more about how to use lighting, materials and mapping later.
4. Choose complete setup type, click Next to continue.
5. Choose destination location, click Next to continue.
6. Installing.
Render the teapot with the Render with V-Ray button and you should see a very bright scene with only a very faint trace of the box. Note: This means that the V-Ray camera setting is set to take in too much light and no material has been applied to the objects, which causes V-Ray to assume full light reflectance, hence the blown-out image.
We will troubleshoot this next. The default is to use the CPU, and you can keep this setting unless you know more about how computers render images, in which case you might be more comfortable running both. If you are rendering on a laptop, using the CPU is most likely your best option. Interactive: When this is active, you are able to move your view in Rhino and the Frame Buffer will update with it. CAUTION: this will put a huge strain on your computer, as it will constantly try to render every view until you come to a stop, so be careful using this feature.
The default is to have Interactive off, and you should mostly keep it this way. Progressive: This is a new addition to V-Ray which incorporates a new way of rendering images; for now, keep this on. Basically, it helps to improve the speed of your render. Quality: This switch controls the overall quality of the render. Pushing it higher puts a heavier burden on your computer, while decreasing the quality (Draft) can be quick when you are just testing an idea or lighting, or need a basic render.
Update Effects: Controls how often during the render process the after-render effects (e.g. the Denoiser) are applied. For a test render there is no need to use it. After changing the Quality and Denoiser switches, the quality of the image is improved.
Notice that you can only see a very faint trace of your box, or it is completely white. Use Mapping to adjust the texture map direction. Since we didn't assign mapping for the texture map, it will follow the object's UV setting to render. Select the teapot first, then switch to the Mapping dialog under Properties. Make sure Show advanced UI is checked. Click on Add to create a new channel.
Click on Show Mapping to display the mapping widget. Pull down the Projection menu and change it from Surface to Planar. Render and you will get an image like the one below. Because the Planar projection projects the Bitmap from the top down, the texture map is not yet showing the correct direction. Under Rotation, set the X value to 90, then left-click on an empty place. This will rotate the mapping widget 90 degrees in the X direction to change the direction of projection. The image on the left shows the mapping widget rotated 90 degrees; on the right is the rendered result. Rhino sets the default projection to Surface if no other projection is assigned to the object, and renders the object according to its UV directions.
When changing to a different projection, the default size of the mapping widget is set to the perimeter of the object. Show Mapping displays the mapping widget in the scene. Within the working window, the mapping widget can be moved, rotated and scaled. An object can have multiple projections at the same time, each within a different channel. Use Add to create a new channel. Use Edit to renumber the channel. Use Delete to delete the current channel. The type of projection, the size and position of the mapping widget, and the projection direction will all affect the Bitmap on the object.
Of course, those will also affect the final render result. Below are examples using the teapot to show different UI settings. Use the corresponding UI for different objects, and spend some time to find exactly what you want to get the ideal render result. Click on the teapot and open the Material Editor to add a Bump map to the teapot. Same as before, select Bitmap from the Type pull-down menu. Under the Maps section on the right side of the Material Editor, check Bump and click on the "m" to open the Texture Editor. After importing the map, make sure the Bump map uses the same tiling as the Diffuse map. For example, if the Bump map is using U: 2 and V: 2, the Diffuse map should be the same. Otherwise, these two maps will not align correctly.
Also, start the Multiplier on the left side with a smaller value. Setting the value too high will result in an unnatural look for the material. The surface of the teapot looks very smooth. The image on the right is rendered with a Bump map added to the teapot and its handle.
You can clearly see the Bump texture within the brushed metal and the handle. Adding a little bit of Bump map now will make the object look even better. The image on the left uses only Glossiness from the Reflection settings. The image on the right has a Bump map added to it.
Below are some examples of textures created with Bump maps. A Bump map uses the grayscale values of the Bitmap to set the high and low parts of the texture. The bright part of the Bitmap is treated as high and the dark part as low. The Bump map is seen most clearly where the object reflects the most light.
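The grayscale-to-height idea described above can be sketched in a few lines. This is only an illustration of the principle, not V-Ray's actual code; the 0-255 pixel range and the simple linear scaling are assumptions.

```python
# Illustrative sketch: a grayscale bump map read as relative height.
# Pixel values assumed 0-255; brighter pixels are treated as higher.
bump_row = [0, 64, 128, 192, 255]  # one row of a grayscale bitmap

multiplier = 0.5  # the Multiplier value set in the Texture Editor
heights = [multiplier * (v / 255.0) for v in bump_row]
print(heights)  # darkest pixel -> 0.0 (low), brightest -> 0.5 (high)
```

Raising the multiplier stretches the same grayscale range over a larger height span, which is why too high a value quickly looks unnatural.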
Using a Bump map to create bumped texture is only a visual effect, not the true surface of the object. Look at the edge of the object and you will still see the smooth surface. Alpha Contribution works on a scale between 1 and 0: 1 means no alpha (all white) and 0 means full alpha.
Adding Alpha Contribution: Open the Material Editor, click on the material Options tab, then change the Alpha Contribution value for each material you want in the alpha channel. Here are examples of different Alpha Contribution settings. Displacement is very similar to how bump mapping works, but each method achieves the result in a different way.
Bump mapping simply shifts the shading of the surface according to the image applied to it, without actually changing the geometric structure of the surface. This makes bump mapping somewhat limited in its ability to represent those surfaces.
Displacement, on the other hand, actually creates the geometry that is described by the image. This is done by subdividing a given piece of geometry and adjusting the individual heights of all of the faces based on the image describing it.
The result is a surface that is much more accurate and realistic. Adding Displacement: Using displacement is very similar to using bump mapping. In fact, you can probably use your current bump maps as displacement maps. In the Maps rollout of the material options there is an option for Displacement.
Although textures are used for displacement maps in most situations, it is also possible to add a displacement map via procedural mapping. Once either a texture or procedural mapping is added, there is one last thing to pay attention to while still in the Texture Editor: the multiplier.
The multiplier is what actually determines the final size of the displacement; it works together with the Amount value in the Displacement rollout. Displacement Parameters: In the V-Ray for Rhino Options there is a rollout which contains the parameters for displacement.
It is important to note that these are global controls for all of the displacement throughout the scene. Currently there are no individual controls on a per-object or per-material level. The Amount value may be the most important value within the rollout, as it determines the scale of all displacement. The Amount value is the displacement, in scene units, of an object with the texture multiplier set to 1.
This means that one could adjust the effect of displacement through either the Amount value or the texture multiplier, but because the Amount value affects all displacement, it is recommended that it be left constant and the texture multiplier be used to adjust the displacement of an individual material.
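The relationship above reduces to a single multiplication: the final displacement height is the global Amount value times the per-material texture multiplier. The material names and multiplier values below are hypothetical, for illustration only.

```python
# Sketch of the displacement relationship described above:
# final height (scene units) = global Amount * per-material multiplier.
amount = 1.0            # global Amount value, left constant as recommended
multiplier_wood = 0.25  # hypothetical per-material texture multipliers
multiplier_brick = 2.0

height_wood = amount * multiplier_wood
height_brick = amount * multiplier_brick
print(height_wood, height_brick)  # 0.25 2.0 scene units
```

Because Amount is a shared factor, changing it rescales every displaced material at once, while each texture multiplier only affects its own material.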
Both the Maximum Subdivisions and the Edge Length affect the quality and speed of the displaced mesh. Maximum Subdivisions controls the number of subdivided triangles that are allowed to be created from a single triangle of the original mesh. In general, it is better to have a slightly denser mesh and a lower maximum subdivision rather than a simpler mesh and a higher maximum subdivision.
Depending on the density of the render mesh created by Rhino, the maximum subdivisions may not necessarily come into play. The Edge Length determines the maximum length of a single triangle. By default this value is expressed in pixels, but if you disable View-Dependent then the Edge Length value will reference your scene units.
Smaller values will lead to higher quality, while larger values will decrease the quality. The first way, which is the simplest, is to keep the Amount value in the displacement options at 1 and adjust the texture intensity as an expression of scene units. The plane on the left has a smaller texture multiplier; the plane on the right has a multiplier, and therefore a maximum displacement, of 2.
Example 1. The second way to set up displacement is by making the maximum displacement the Amount value in the V-Ray options and setting the texture multipliers as a percentage of that maximum value. In the case of the two planes on the right, the Amount value is 2. You will notice in Example 2 that the rendered image is the same in both cases. That is because it does not matter which method is chosen, only that the multipliers are in line with the desired effect.
The image on the left is an example of the different quality settings for displacement. The plane on the left has an Edge Length of 24 pixels and a Maximum Subdivision of 6. The plane on the right has an Edge Length of 2 pixels and a higher Maximum Subdivision. Here is a comparison of bump mapping (left) and displacement (right).
Both the maps and intensities are the same. As you can see, the bump map is limited in its ability to create the depth that displacement is capable of. Transparency mapping is another method of using a Bitmap to create materials. The difference is that it uses an alpha channel to get rid of the unwanted part of the Bitmap, saving only the part covered by the alpha channel.
This is called a mask. It is used mostly for creating product logos, stickers and numbers. Many users try to avoid using transparency mapping and instead model the actual object in the scene. Although you can avoid the material settings by modeling the objects, that will increase both the number of objects in the scene and the file size. The more objects you have, the longer the rendering time.
You will get the result in the left image if you apply the texture map directly without a transparency map. The black background of the texture map is blocking part of the cup. The image on the right is rendered with a transparency map. Here is the object and the transparency map that we will use to create our label. Click on the cup and open its Material Editor. A Diffuse1 control panel is added under the Diffuse. Load the Bitmap for the Transparency texture map.
Make sure you uncheck Tile first to avoid repeating this Bitmap on the object. Use Photoshop, PhotoImpact or similar image-editing software to create the black-and-white image and save it. Use the Diffuse1 color to edit the color for this Transparency map, and adjust this map if needed. The Transparency map is covering the entire cup; that's because there is no mapping applied to the cup yet.
If Tile remains checked, it will render as the image on the right. How Transparency Mapping works: The diagram below depicts transparency mapping.
The idea is to use a grayscale image as a mask: the black area will not be penetrated, only the white area will let light through, and gray areas become translucent. The white area gets the color assigned in Diffuse1, which ends up showing on the surface of the object after rendering. For the cup example above, after assigning a mask to the Transparency, the red color of the cup is covered by the Diffuse1 color in the white area of the mask and no longer shows.
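The masking logic above can be sketched as a simple per-pixel blend. The RGB values are made up for illustration, and the linear blend is an assumption about how the mask behaves, not V-Ray's actual implementation.

```python
# Sketch of transparency masking: a grayscale mask blends the Diffuse1
# color over the base Diffuse color. 255 (white) shows Diffuse1 fully,
# 0 (black) keeps the base color, and grays give a translucent mix.
red = (200, 30, 30)      # hypothetical base Diffuse color of the cup
label = (255, 255, 255)  # hypothetical Diffuse1 color for the label

def blend(base, over, mask_value):
    t = mask_value / 255.0  # mask brightness as a 0..1 weight
    return tuple(round(b * (1 - t) + o * t) for b, o in zip(base, over))

print(blend(red, label, 0))    # black mask area: base red unchanged
print(blend(red, label, 255))  # white mask area: label color shows
print(blend(red, label, 128))  # gray mask area: translucent mix
```

This is why the red of the cup disappears wherever the mask is white: at those pixels the blend weight is 1 and only the Diffuse1 color remains.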
The second layer, the Diffuse1 color in the diagram below, is used to cover the white area of the first layer. Another method for creating the same result: totally opposite from the method above, set the Diffuse to white and the Transparency as a mask, but switch the black and white areas, and let the Diffuse1 color show up on the upper layer.
Assign the Diffuse1 color to red and you will get the same result as above after rendering. Here are some more examples often used to create a texture map. This is a better way than using a gradient Bitmap directly as the Diffuse texture because of its flexibility in changing the colors. With a direct Bitmap, you would have to make another Bitmap with a different color combination if the colors needed to be changed.
The second example uses another grayscale Bitmap as the transparency mask. Although it is not a gradient image, it works exactly the same way. A rendered image is on the right. The third example adds another Diffuse2 layer, and assigns a 0-degree gradient Bitmap and a rotated gradient Bitmap to the Transparency of Diffuse and Diffuse1.
Give the Diffuse2 a third color to create this three-color gradient rendering effect for the cup. A rendered image is on the right. The fourth example, similar to the previous ones, uses a gradient grayscale Bitmap as the Diffuse Transparency mask and adds a Refraction layer to create the half-transparent, half-opaque effect.
The fifth example is the same as the third example, with the difference being a refraction layer and changing the transparency color of Diffuse2 to white. This will make the white area in the middle become transparent. A rendered image is on the right. The examples above cannot have the transparent quality at the top and bottom of the cup because of the black color in the grayscale gradient Bitmap. The last example below uses a pre-made gradient Bitmap as the Refraction Transparency map.
Set the Diffuse Transparency to the usual white, then assign the Bitmap to Transparency under the Refraction control panel, and you will get the same result as the image shown on the right. The Two-Sided material works with very simple controls, so it's much easier to control the result than using a translucent material, and it renders significantly faster as well. Due to the nature of this material it is actually best to use single surfaces rather than a solid, which you would need for any refractive material.
Open the Material Editor, right-click on Scene Materials and go to Add Material. This will bring up another menu with several different material types.
Click on Vray2SdMat, which is in the middle. The Two-Sided material works with predefined materials. There are two slots, one for the front material and one for the back material, as well as a color which determines the ratio between the front and the back material. You cannot create a new material from inside the Two-Sided material, as it only works with predefined materials.
When you click on the button for the front material, a dialog box will open asking you to choose which material you would like to be the front material. You must define a material for both sides, but you can use the same material for both. The color is how V-Ray determines the ratio of front material to back material.
The color works with grayscale values, and produces the best results with mid-range values. If you would like to see which faces are the front and which are the back, you can configure backfaces to be a different color when they appear in the viewport. This can be configured by typing AdvancedDisplay into the command line and setting the backface color in the desired display mode. The Two-Sided material can be very useful for creating quick conceptual renders when trying to convey ideas with minimal modeling.
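The grayscale ratio described above can be sketched as a simple weighting: the brightness of the color decides how much of the front material shows versus the back. The linear mapping is an assumption for illustration, not V-Ray's documented formula.

```python
# Sketch of the Two-Sided material's front/back ratio: the control color's
# grayscale value (0-255) is read as the weight of the front material.
def two_sided_weight(gray):
    front = gray / 255.0   # portion of the front material that shows
    back = 1.0 - front     # remainder comes from the back material
    return front, back

print(two_sided_weight(128))  # mid-gray: roughly an even split
print(two_sided_weight(255))  # white: front material only
```

This also shows why mid-range grays are the useful region: near 0 or 255 one side dominates completely and the blend effect disappears.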
Open the Material Editor, right-click on Scene Materials and go to Add Material. Click on VraySkp2SdMat, which is the last option. It has two slots: one for the front material, and another for the back material. As with the V-Ray Two-Sided material, the materials cannot be created from within the Two-Sided material; they must already exist in order to be added to either the front or the back slot.
Although it is possible to use many of the features of the standard V-Ray material within the SketchUp Two-Sided material, it is not recommended to use any refraction layers within materials used for the Two-Sided material. Whichever side does not have a material assigned will not be rendered. This can be very useful for architectural visualization, and can be used to look inside rooms while the wall still affects the illumination of the enclosed environment.
To determine the amount of blend between the two materials, it uses the angle between the view direction and the surface normal. It is very useful for creating materials such as velvet.
Put your dark material in the first slot and your bright material in the second slot. The Start and Stop Angle values control the amount of blend between those materials. You simply can't get a good rendering result without a good lighting environment.
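The angle-based blend described above can be sketched as a function of the viewing angle. The specific start/stop angles and the linear ramp between them are assumptions made for this illustration; the real falloff curve may differ.

```python
# Sketch of an angle blend: the weight of the second (bright) material
# grows as the angle between the view direction and the surface normal
# moves from the Start Angle to the Stop Angle. Angles are in degrees.
def angle_blend(view_normal_angle, start_angle=20.0, stop_angle=70.0):
    """Return the 0..1 weight of the bright material (hypothetical ramp)."""
    if view_normal_angle <= start_angle:
        return 0.0  # facing the camera: dark material only
    if view_normal_angle >= stop_angle:
        return 1.0  # grazing angle: bright material only
    return (view_normal_angle - start_angle) / (stop_angle - start_angle)

print(angle_blend(10))  # 0.0
print(angle_blend(45))  # 0.5
print(angle_blend(80))  # 1.0
```

This is why the effect resembles velvet: surfaces seen at grazing angles pick up the bright material, while faces turned toward the camera stay dark.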
As with real-world lighting, light sources are divided into direct and indirect lighting. Indirect lighting refers to any lighting which comes from bounced light or from an environment. Let's do a test. Open file: Cup-Illumination. The light source is the Environment light. So far the cup and ground are using the same off-white color.
Render it with the GI default setting of 1 and you get the result shown on the right. Increase the GI value to 2 without changing the color; the result is shown on the right. Render it again and the result is very close to the first image at the top. The reason for doing this test is to help users understand the relationship between lighting and material.
Should the lighting be adjusted to accommodate the material, or should the material be adjusted to accommodate the lighting? With the second image from the previous test, if we created another material and inserted it into the scene, it would not render how we created it. What color is it? If you walked into a closet with no light, what would the color of your shirt be? The answer is that the color of your shirt would be the same, BUT it would appear different based on the lighting environment.
This is why you should adjust your lighting to achieve the desired effect, as opposed to changing the materials. With an incorrect lighting environment, such as the second part of the example on the previous page, it will be very hard to predict how your scene will react.
When adding a new material, it will not look how it did when you created it, thus making it harder to achieve the intended appearance for the material. Incorrect lighting also has an adverse effect on other aspects of your rendering: it may affect shadows and reflections, and even make your rendering take longer than it should.
Now you see why having a proper lighting solution is very important. Interior or Exterior? When facing the task of illumination, separate it into interior illumination and exterior illumination. Here exterior means open space.
For example, place an object on the ground without any walls around it to block the light. It's easier to adjust illumination for open space. Interior means the light source is blocked by walls or other similar objects in the scene; an enclosed space in which the environment light does not have a direct effect on the object.
Some openings in the wall, or windows, may allow part of the environment light to come through. Interior lighting is generally more complex than exterior lighting. The image on the left shows open-space illumination and the image on the right shows semi-open-space illumination. The next image on the left shows the same semi-open space with one more opening added to the wall.
The brightness increased due to the second opening added to the wall. The image on the right shows that different locations for the openings also affect the brightness of the scene. The number of objects, object locations, material types, colors and even sizes will all affect the illumination in some way.
When beginning to create the lighting solution it is important to have a solid base in which to begin evaluating how you will need to light your scene, as well as how it will react to lighting.
With V-Ray this task is very easy because of how the environment light works. Basically, with your environment color set to white and the intensity set to 1, you should get neutral lighting of your scene.
This is useful in that it will allow you to properly assess the appearance of your materials, as well as see if there are any areas of your scene which will naturally receive more or less light from the environment. Now let's see this in action. Open file Cup-Illumination. This is an easy open-space example: there is no light added to the scene, and the Environment color and intensity are currently set to white and 1 respectively.
Using a white floor color is important as it will show the greatest amount of light that will affect the scene. This is because white allows the most light energy to be retained after it bounces off a surface. With the white floor, we know that if we change its material to something darker, then we can expect a little less bounced light in our scene. In an exterior scene like this one the effect is minimal, but when creating an interior illumination solution this is an important thing to know.
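The effect of surface brightness on bounced light can be sketched numerically: at each bounce, a surface keeps only a fraction of the light energy equal to its reflectance. The reflectance values below are made up for illustration.

```python
# Sketch of why floor color matters for bounced (indirect) light:
# each bounce multiplies the remaining energy by the surface reflectance.
def energy_after_bounces(albedo, bounces, initial=1.0):
    return initial * albedo ** bounces

white_floor = energy_after_bounces(0.9, 2)  # near-white surface
dark_floor = energy_after_bounces(0.3, 2)   # dark surface
print(white_floor, dark_floor)  # the dark floor retains far less energy
```

After just two bounces the near-white surface keeps the large majority of the energy while the dark one keeps under a tenth, which is why darkening the floor noticeably dims an interior scene.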
Assign the dark gray color to the floor first, then re-assign the red color to the cup; render it and you get the result below. From the two images above we can see the colors for the floor and cup are rendered very close to their actual colors, which means the Environment lighting is set to the correct intensity and brightness for creating good illumination.
Otherwise, if the intensity is too strong, it will make the floor and cup appear brighter than the values that we set when we made the material.
Now that we have a good render we can begin the task of adding more lighting to the scene. Depending on what you are trying to create, this may require only one additional light (the sun, perhaps) or many lights.
The important thing to remember is that the lighting must be balanced. Since we already have a scene which would become overly bright, or burned as it is sometimes called, if any additional light were added, there must be a compromise between the different lights. In most cases this means that the environment intensity will be decreased, but the ratio between environment light and other lights is something that you must determine.
Try out different options: one where the environment light is stronger than other lights, and another where the other lights are stronger than the environment. Open file Cups-HDR. Because the texture is being applied to the environment and not an object, make sure you check Environment under UVW after the file is imported. Render it and you will get the image on the right. You will see a big difference between this image and the image that used only a color for the Environment light source.
This is because the HDR provides the illumination for the scene based on the colors and intensities of the image. After the Background HDR is added, the result is as the image on the right.
You can see the light and color change dramatically according to each HDR image. Because HDR images are usually provided by others, the lighting environment may not produce the desired effect, and it may take some time to adjust the intensity. Although an HDR image produces better results than a normal image, HDRs still lack the true brightness of a natural environment. So normally an HDR is used only as the Environment light source, and usually some additional light is added.
Although a normal Bitmap doesn't have the same ability to create as dynamic an environment, normal images are very easy to get. As long as you pick the right Bitmap and control the Intensity well, it can still be a very good Environment light source.
The three images on the right are rendered with different Bitmaps. If you compare them to the images rendered with HDR images, it is not as easy to determine the direction of the light, and the shadows are not very clear. Now it is time to use a semi-open interior space to see the differences between interior and exterior illumination. Open file: GI Environment. In the scene is an enclosed cube with an opening on the right. There are some objects placed on the wall next to the opening wall, and there is no light in the box.
All objects use the same gray color; the current GI Intensity is 2 and the color is light blue. Render it and you will get the almost black image on the right.
The result is due to there being no light in the scene, with only a small opening allowing the Environment light to come in. Increase the GI Intensity to 4 and render again; the result, as below, is a little bit brighter this time. Increase the GI to 8 and render again; the result is closer to reasonable illumination. When beginning to set up the illumination for an interior space, a first step should be to check how many openings in the scene allow environment light to come in.
This includes transparent objects like windows or doors. It is also important to know how many lights are intended to be in the final scene, as well as what time of day the rendering is meant to depict.
These are all very helpful for setting the Environment light correctly. Very often the camera gets moved during this process and the quality and brightness are not what you were expecting. Even though the interior lighting is under control, once the camera is pulled out of the box, you will get the bright white rendered result shown in the image on the left. It will still be necessary to add light in the room and adjust the brightness to render as the image on the right.
Each engine has its own method of calculation, each with its own advantages and disadvantages. V-Ray uses two render engines to calculate the final rendered image.
Open the Indirect Illumination control panel under Options. There are Primary Engine and Secondary Engine options in the panel below. The default is Irradiance Map. When switching between different engines, the control panels will also change according to the assigned engine. Classification of light bounces: Direct light - this is the light which is calculated directly from a light source. If GI were not enabled, or if no engine were selected for either primary or secondary bounces, the rendered image would be the result of only the direct light.
It is not necessary to specify an engine for these calculations, as they are done through standard raytracing. Environment light is not considered a form of direct light. Primary bounces - this is the light's first bounce after the direct light hits a surface. These bounces usually have the greatest effect on the scene in terms of indirect lighting, as they retain a significant portion of the light energy. Environment light is calculated as a primary bounce. Secondary bounces - this is all of the light which bounces around the scene after the primary bounce.
As light bounces around a scene, its intensity, and therefore its effect on the final illumination, becomes less and less. Because of this, secondary bounces can all be calculated through a single method. With exterior scenes these bounces have a relatively insignificant effect on the final result; with interior scenes, however, they can become as important as primary bounces.
Open file Cups-Irradiance Map. There is a very important setting here related to image quality: Min Rate and Max Rate. The defaults for Min Rate and Max Rate are -3 and 0; in this file they are currently -8 and -5. Render it and you will get the image below. Notice that the calculation is very fast, but the shadow and illumination quality are low. The image includes splotchiness and artifacts as well. Min Rate controls the minimum sampling for each pixel. A value of -1 means 2 pixels per sample.
A value of -2 means 4 pixels per sample, and so on. A smaller value means fewer samples are taken from the object, so the render quality of shadows, reflections and refractions is not very good. Going the opposite way results in better quality but longer render times. Max Rate controls the maximum sampling for each pixel. A smaller value means fewer total samples are used to calculate the light.
The opposite results in better quality but a longer render time. The default setting of -3 and 0 represents four passes of the render job: from -3, -2, -1 to 0. So you will see Prepass 1 of 4 through Prepass 4 of 4 in the render process dialog box. According to the definitions above for Min Rate and Max Rate, a -8 and -5 setting will not have the same result as -3 and 0, even though each consists of the same number of passes.
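The rate-to-sampling relationship described above can be sketched directly: each step of the rate doubles the pixel spacing between samples, and the passes run from Min Rate up to Max Rate. This is only an illustration of the arithmetic in the text, not V-Ray's internal code.

```python
# Sketch of Min/Max Rate: a rate of 0 is one sample per pixel, -1 samples
# every 2 pixels, -2 every 4 pixels, and so on. Each prepass uses one rate.
def prepass_spacings(min_rate, max_rate):
    """Pixels per sample for each prepass from min_rate to max_rate."""
    return [2 ** -r for r in range(min_rate, max_rate + 1)]

print(prepass_spacings(-3, 0))   # [8, 4, 2, 1] -> four passes (default)
print(prepass_spacings(-8, -5))  # [256, 128, 64, 32] -> also four passes,
                                 # but at far coarser sampling
```

This makes the point in the text concrete: -8 to -5 runs the same number of passes as -3 to 0, yet even its finest pass samples only every 32nd pixel.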
Users can set low values for Min and Max Rate, for example -6 and -5, to render faster previews while creating the lighting and material settings in the scene. Although the quality might not be good, it should be acceptable for previews. After all settings are correct, render with higher values to get the best final-quality image.
The image on the right is the final result. The image on the left shows the last prepass of -3 to 0; the image on the right shows -3 to 1. Although the one on the right has a slightly better final result, the difference is very small. Subdivisions are the next means of quality control with the Irradiance Map. Higher subdivisions will yield better quality, and with higher subdivisions it may also be necessary to add more samples.
You can see in the arrangement of the irradiance points (the little white dots) that the second image is much smoother. See the image on the left for an example. This is due to a lack of Samples when calculating the prepass. Of course, this only happens when using the Irradiance Map rendering engine. The image on the left is rendered with low Min Rate and Max Rate values; you can clearly see light leaking through the corner.
The image on the right increased the values to -3 and 0 and you can see a big improvement. DMC is most useful for scenes with a lot of small details.
The downside of this method is that it takes significantly longer to render. The image on the left is rendered with the Irradiance Map. Although the one on the right looks slightly grainy, the colors are reproduced much more accurately with the DMC calculation. DMC generally produces a slightly grainy result.
Now change the Max Subdivisions to a higher number. With DMC it is much easier to set up a rendering, as there are very few settings that need to be adjusted.
Artifacts such as light leaks and splotchiness will not be a factor in DMC renders. It is recommended that DMC only be used for final or high-quality test images due to the amount of time required to complete the render. Results similar to DMC can be obtained through Irradiance Maps, usually in less time than DMC, so it may not be completely necessary to switch to DMC for final images, depending on the situation.
Light Cache is calculated in a way that is very similar to Photon Mapping. With Photon Mapping, the calculation starts from the light source and collects light energy along the way.
Light Cache starts from the camera instead. Some advantages of using Light Cache are that it doesn't have many settings to deal with and it renders quite fast. The image on the right is slightly brighter. This is because Light Cache calculates an infinite number of secondary bounces, whereas DMC only calculates a predetermined number of bounces. Although each of these bounces is individually insignificant, their combined effect increases the brightness of the image.
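The brightness difference above is just a geometric series: a fixed number of bounces sums only a few terms, while an infinite number converges to the series limit. The per-bounce reflectance value below is made up for illustration.

```python
# Sketch of finite vs. infinite secondary bounces. With reflectance r per
# bounce, n bounces contribute r + r^2 + ... + r^n, and infinitely many
# bounces converge to r / (1 - r).
r = 0.5  # hypothetical fraction of light energy kept at each bounce

finite = sum(r ** k for k in range(1, 4))  # three secondary bounces
infinite = r / (1 - r)                     # limit of the geometric series
print(finite, infinite)  # 0.875 1.0
```

The missing tail of the series (here 0.125) is exactly the extra brightness the Light Cache image picks up over the fixed-bounce calculation.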
Subdivs is the most important setting for Light Cache. Subdivs decides how many rays are traced from the camera to calculate the light distribution. The actual number of rays traced is the square of the Subdivs value.
With the default of 1000, for example, the actual number of traced rays will be 1,000,000. When determining how many Subdivs will be sufficient for an image, the best way is to look at the progress window, monitor the appearance of the image in the frame buffer, and approximate the number of samples according to the progress and the total number of samples.
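The squared relationship above is worth making explicit, since it means the cost grows much faster than the Subdivs value itself. A quick sketch:

```python
# The Subdivs relationship described above: the number of rays traced
# from the camera is the square of the Subdivs value.
def traced_rays(subdivs):
    return subdivs ** 2

print(traced_rays(1000))  # 1000000 rays with the default of 1000
print(traced_rays(500))   # 250000 rays: halving Subdivs quarters the work
```

So doubling Subdivs quadruples the number of traced rays, which is why small increases in this value can noticeably lengthen the Light Cache phase.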
If the process is done but there are still a lot of black dots in the window, that means more Subdivs are needed to produce an accurate result. The image below shows a Light Cache calculation which still has a large number of black spots.
Sample Size is used to determine the size of each sample. A smaller number will yield more detail and a sharper image, whereas a larger number will lose some detail but give a smoother result.
In each of these images the primary and secondary bounces are calculated with Light Cache. The image on the left has a smaller sample size. In both cases the top image is the result at the end of the Light Cache calculation and the bottom image is the rendered result. It is important to note that Light Cache is not appropriate for primary bounces, as it does not produce smooth results or good details. It is only used as a primary bounce in this case to illustrate the difference in sample size.
Scale in Light Cache: In order to determine the size of each sample, Light Cache gives itself a scale to work with. The default setting for scale is Screen, which means that each sample is a percentage of the image. The default value is 0.02, which means that the size of each sample is approximately 2 percent of the total image.
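A consequence of the Screen scale described above is that the same sample size covers more pixels at higher resolutions. The arithmetic below assumes the 2-percent default and a simple width-based interpretation, for illustration only.

```python
# Sketch of Screen scale: each Light Cache sample spans a fraction of the
# rendered image, so its size in pixels depends on the output resolution.
def sample_size_pixels(sample_size, image_width):
    return sample_size * image_width

print(sample_size_pixels(0.02, 800))   # about 16 pixels wide at 800 px
print(sample_size_pixels(0.02, 1600))  # about 32 pixels wide at 1600 px
```

This is why a Screen-scaled Light Cache that looks adequately detailed in a small test render can appear blotchier when the same settings are used at final resolution.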