Houdini Week 4

This week, we’re focusing on Houdini’s materials, lighting and rendering.

Displacement rendering

Displacement maps are usually used to represent fine height variations on a surface at render time. The renderer moves each point along the surface normal by a distance defined in the map, which gives a texture the ability to express real detail and depth. Unlike simpler techniques, displacement also allows self-occlusion, self-shadowing, and correct silhouette edges. On the other hand, compared with similar techniques it consumes the most performance, because it requires a lot of additional geometry.
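
Roughly, the idea looks like this minimal VEX sketch for a Point Wrangle. It assumes the geometry already has normals (@N) and that a float point attribute named "height" stands in for the value sampled from the displacement map; the "scale" slider is a hypothetical parameter.

    // Point Wrangle sketch: displace each point along its normal.
    // "height" stands in for the sampled displacement map value.
    float scale = chf("scale");              // hypothetical strength slider
    @P += normalize(@N) * f@height * scale;  // move the point outward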

Mantra is a highly advanced renderer included with Houdini. It is a multi-paradigm renderer that implements scanline rendering, ray tracing, and physically based rendering. You should use the physically based rendering engine unless you have a good reason to use another one. Mantra is deeply integrated with Houdini, with features such as efficient rendering of packed primitives and volumes.

Lighting

Ambient lighting adds light to the scene as if it came from a sphere surrounding the scene. Usually, the light is colored using an image called an environment map. Environment maps can match the lighting (and reflections) of the scene to a real-world location, and can also be used to add interesting variation to the lighting of the scene.

HDR mapping refers to environment mapping with high-dynamic-range images in 3D and image-editing software. Generally speaking, an HDR texture is a seamless texture made from HDR photographs (a seamless texture is an image whose edges tile top to bottom and left to right with no visible seams). HDR textures are generally of natural scenery or indoor environments.

An HDR map is an image that stores high-dynamic-range lighting information: a normal image is 8-bit, while an HDR map is 32-bit, so it holds far more tonal detail. High-dynamic-range images come closer to the dynamic range of the human eye, and can even exceed it; in short, they are photographs with rich detail in both the highlights and the shadows. HDR images are typically created by merging several photos taken from the same position at different exposures, which makes up for the limited dynamic range of a camera.

8-bit color:

In a television, each value represents a specific level within a color channel. When we talk about 8-bit color, we basically mean that the TV can represent values from 00000000 to 11111111, or 256 levels per channel. Since a TV reproduces red, green, and blue, 256 levels per channel means it can reproduce 256 x 256 x 256 colors, a total of 16,777,216. This is commonly called True Color and has been the standard for TVs and monitors for many years.

10-bit color:

10-bit color can represent each of the red, green, and blue channels with values between 0000000000 and 1111111111, which is 1024 levels per channel, or 64 times as many total colors as 8-bit. This reproduces 1024 x 1024 x 1024 = 1,073,741,824 colors, far more than 8-bit color. For this reason, many gradients in the image will look smoother; as shown in the image above, 10-bit images are noticeably better than 8-bit images.

12-bit color:

The 12-bit color range is from 000000000000 to 111111111111, giving 4096 levels of each primary color, or 4096 x 4096 x 4096 = 68,719,476,736 colors. Although this is technically 64 times as many colors as 10-bit color, the TV must produce a bright enough image for the difference between the two to be visible.
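
As a quick sanity check of those numbers: with b bits per channel there are 2^b levels per channel and (2^b)^3 total colors.

    8-bit:  2^8  = 256  levels per channel,  256^3  = 16,777,216 colors
    10-bit: 2^10 = 1024 levels per channel,  1024^3 = 1,073,741,824 colors
    12-bit: 2^12 = 4096 levels per channel,  4096^3 = 68,719,476,736 colors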

Light object:

Point - A light that emits from a single point in space, defined by the light's transform.

Line - A line of light running from (-0.5, 0, 0) to (0.5, 0, 0) in the light's space.

Grid - A rectangular grid from (-0.5, -0.5, 0) to (0.5, 0.5, 0) in the light's space.

Disc - A disc-shaped light. The disc is a unit circle in the XY plane of the light's space.

Sphere - A spherical light. The sphere is a unit sphere in the light's space.

Tube - A tube light. The first Area Size parameter controls the height of the tube, and the second controls its radius.

Geometry - Uses the object specified by the Geometry Object parameter to define the shape of the area light.

Distant - A directional light source infinitely far from the scene. Distant lights cast sharp shadows, so depth map shadows can be used.

Sun - A finite-sized (non-point) directional light source infinitely far from the scene. Sun lights are similar to distant lights, except that they produce a penumbra, like the real sun, giving soft shadows.

Arnold

Houdini Week 3

This week we will try to make a destruction effect, and I will use the wooden house model I made in the first week to test it.

First, we need to understand Voronoi.

Voronoi fracturing is the closest thing we have found to natural-looking broken shapes. The way it works is basically that we scatter a lot of points and then, between every pair of neighboring points, draw a boundary halfway between them; each boundary stops where it meets a boundary belonging to another pair of points. The resulting irregular cells are exactly what we want for the 3D shattering process, and their broken irregularity gives the effect a more realistic look.
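
A minimal VEX sketch of that idea in a Point Wrangle, assuming the scattered seed points are wired into the wrangle's second input (the "cell" attribute name is just for illustration and is not something voronoifracture itself creates):

    // Point Wrangle: input 0 = the mesh, input 1 = the scattered seed points.
    // Each mesh point belongs to the Voronoi cell of its nearest seed.
    int seed = nearpoint(1, @P);   // index of the closest scattered point
    i@cell = seed;                 // store the cell id for later grouping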

The first way

Voronoifracture + Explodedview nodes

When you uncheck Distance VDB on the vdbfrompolygons node and select Fog VDB instead, the appearance of the model changes.

Create an isooffset node. Connect the isooffset node directly to the sphere node. At this point, you can observe the change in the shape of the object.

The second way

Add scatter/grid/copy to points and attribrandomize

Modify the scale of the transformation and the uniform scale of the exploded view.

The third way

Add rbdmaterialfracture

Increasing the Scatter Points value in rbdmaterialfracture increases the number of fragments.

Houdini Week 2

The task for the second week is to start exploring the particle system and record some preview videos.

[Auto Update]: When we change the value of a node in Houdini, the viewport updates the effect automatically (and very quickly).

[On Mouse Up]: When we change the value of a node in Houdini, the viewport only updates once we release the mouse button.

[Manual]: Manual update. The viewport only refreshes when we trigger an update ourselves after changing node values. (This mode is very convenient; I used it for my last homework.)

[Null] node: This node does nothing to the data passing through it, but creating a [Null] node in the network makes it easier to mark and observe a stage of the process. (Maybe because its shape is so unusual: most other nodes are rectangular, while the [Null] node has a shape more like an X.)

VOPs and VEX

First create a box, select the node, and press I to dive inside it.
Create an attribute wrangle node and connect it to the box node.
Select the attribute wrangle node, press P to open the parameter panel, and enter the VEX code in the VEXpression field (an example snippet is shown below).
Click in the VEXpression field and press Alt + E to open the VEX code editor.
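
A tiny example of the kind of snippet that can go into the VEXpression field; the "push" parameter is a hypothetical slider created with the button next to the field, not a built-in parameter.

    // Attribute Wrangle (Run Over: Points): color the box by height
    // and push its points slightly outward from the origin.
    @Cd = set(@P.y + 0.5, 0.5, 0.5 - @P.y);   // warm at the top, cool at the bottom
    @P += normalize(@P) * chf("push");        // inflate away from the center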

[Gravity] is something common to all solvers.

Place [gravity] under the [popsolver]. At this point, we can see that the particles emitted from the [testgeometry_crag] model start to fall.

There are different types of wind forces in a [popnet].

Increasing the [Wind Velocity] values in [popwind] changes the direction the particles move in.

[Rendering a flipbook animation]

Render Houdini files.

Create a [popforce] node. Changing the [Amplitude] value in [popforce] gives an effect similar to noisy wind and affects the direction of particle movement.
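
For reference, roughly the same kind of effect can be sketched with a popwrangle inside the popnet. This is only an illustration under that assumption; the "amp" and "freq" sliders are hypothetical parameters, not popforce's own.

    // popwrangle: add a turbulent, time-varying force to every particle.
    float amp  = chf("amp");                        // hypothetical amplitude slider
    float freq = chf("freq");                       // hypothetical noise frequency
    v@force += curlnoise(@P * freq + @Time) * amp;  // noisy push that changes over time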

Houdini Week 1

Houdini is brand-new software for me and a new challenge; I am still at the stage of teaching myself Maya as well.

The main things I studied this time:

Unlike Maya, the Houdini interface allows us to move or rotate the view directly with the mouse.

Many operations can be performed in the object panel on the right

Enter the name of the node you want to create in the [TAB] search panel, select it from the results, and it is created directly. Remember to select the object you want to work with; you can then right-click it in the parameters panel.

I created a base sphere, then followed the tutorial video and created a box in the same way. If you want to switch which object is visible, click the blue display flag on the right side of the [Sphere] node. After hiding the sphere, you can see the box.

The [Parameter] dialog in Houdini can be opened by pressing [P] on the keyboard.

Unlike Maya, the project folder in Houdini does not have a [Scene] folder. Scene data in Houdini is usually very small; in most cases, the main file is stored in the [Desktop Folder].

Rock practice.

Create [Sphere] first, then select [Polygon] in [Primitive Type].

Select [Flat shaded] to observe [sphere].

Set the noise node's [Attribute Names] parameter to [P] so that it affects the point positions.

Add a [null] node. The [null] node has a distinctive shape, which helps us find the current stage in the network more easily.

Change the values of [Noise Type] and [Element Size] in [attribnoise]. You can also add more detail with the [Fractal] and [Deformation] settings.
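
Conceptually, the deformation that [attribnoise] applies when it is set to modify [P] is similar to this Point Wrangle sketch; the "freq" and "amp" names are hypothetical sliders, not the attribnoise parameters themselves.

    // Point Wrangle: add noise to point positions, roughly what
    // attribnoise does when its Attribute Names parameter is P.
    float freq = chf("freq");        // hypothetical frequency control
    float amp  = chf("amp");         // hypothetical amplitude control
    vector n   = noise(@P * freq);   // value noise, roughly in the 0..1 range
    @P += (n - 0.5) * amp;           // recenter around 0 and displace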

The final rock.

Making of the wooden house

Create two [geo] nodes, and inside them create a [box] and a [testgeometry_tommy1] respectively.

Adjust the [box] parameters so that the model stays sitting on the ground plane when you increase the [Size] on the y-axis.

Select points and create transform nodes.

Alt + drag = copy. Create a boolean.

Polyextrude - thickness. Reverse - flips the normals. Roof - two extrusions.

The final wooden house.

Week 5 Adjust

This week was mainly for testing the game. We imported the animation, tested it, and found that it lacked some transition effects, so we modified it.

After the first test, I found that the frame rate was a bit high and the jitter frequency was too fast, so we reduced the frame rate.

In general, our project is proceeding fairly smoothly. We are always looking for ways to make team collaboration closer, so that connections across our different disciplines stay coordinated. It is very important to communicate within the team in a timely manner and to clearly define the team's common goal; having a common goal increases cohesion. All members are working hard, helping each other and cooperating with each other.

In the process of this collaboration, I feel that a team is a group of interdependent individuals working toward a common goal and pushing each other forward. The team needs to use its resources to improve productivity and to clarify the roles within the team. Everyone knows what they should do, but the leader within the team keeps changing, because everyone's role is important at a different stage of development. Sharing decisions and responsibilities, and promptly raising requirements and revisions, is what allows a team to cooperate better.

The skills I learned include 2D animation production, teamwork and cooperation, and learning from my team members so that we could help each other.

Week 4 Modify and import

This week I mainly addressed the issues discussed on the first version of the drawings. This is the final revision I finished.

I encountered some problems during the export process. In one version of the export, the dust could not be seen.

When the Alpha threshold value is at its maximum, the semi-transparent layers cannot be seen at all, and when the value is 0, a black border appears.

I also ran into a problem when importing into Unity. At first I did not know that Unity does not support GIF files. Later I found a website (https://ezgif.com/) that can convert a GIF into a numbered PNG sequence, which solved the import problem smoothly.

Week 3 Character design

For the character, we chose a cat. After looking at some references, I decided to combine Lulu's big tail, big eyes, and small ears with the black patch on Pikachu's tail to create this kitten.

We tested the first painted version and found that the lines were too thin and some details, such as the falling dust, could not be seen well.

This action was originally designed as a 90° turn, but we later found that the angle was too large and the effect was poor.

To express the feeling of being suddenly scared, turning the head and adding the small symbol next to it was still not obvious enough, so I added one frame where the cat is drawn larger than its normal state, with the next frame returning to the normal size. This gives a bounce-back effect, and the mood change feels more natural.

In addition, to get the effect of jittery lines, the earlier drawings were each repainted in full, while a few later changes only redrew the moving parts, which I felt helps highlight the key movement.

This is the initial state of the cat. As the sound changes, I designed some actions to match the game.

With a slight sound, the kitten is not irritated and appears docile.

Here it is irritated by the sound and covers its eyes.

Under moderate stimulation, the kitten pulls up its clothes.

As the stimulation keeps strengthening, it hides under the table with its tail standing up, showing that it is frightened.

Finally, the cat faints from the stimulation.

Because I'm doing 2D animation, every frame must be drawn by hand, frame by frame. For smoothness of the picture, I chose 12 frames per second.

Week 2 Second discussion

  1. The members of the group got to know each other and clarified our majors and skills.
  2. We decided to choose the interactive animation project, hoping to make animation in a creative, interactive way.
  3. The initial idea is to use sound to interact: the protagonist of the animation is a small running animal, and the louder the user's input, the faster it runs.

Chenge Yang is responsible for programming the game prototype, using the Unity engine to implement the sound-controlled animation interaction and the sound input feature. This is the real-sound technology mentioned in the last meeting, which uses microphone input combined with mouse and keyboard functions.

Luo Tang is responsible for the protagonist's character design and animation production and for keeping the style unified, uses Procreate to draw the 2D animation, uses https://ezgif.com/ to export the animation to PNG format as required, and works with the team members to import it into Unity successfully.

Ziqi Wang is responsible for the protagonist's character design, the icon and UI design, and drawing the prototype sketches.

After this meeting we decided to make a game where the louder the sound, the more scared the animals.

Our game is called Frightened Animal.

Voice-controlled games are a genre that has become more popular in recent years. They generally do not demand much manual skill, but they do require a good voice and enough thick skin not to worry about disturbing the neighbours.

This game does not need much volume. In it you become a great magician, and all actions have to be performed with your own voice: the spells are in English and in the game's own language. Imagine yelling “Let there be light!” and the surroundings lighting up. Isn't that a little exciting?

In this horror game, players can only “see” the surroundings by making sounds in the darkness. The biggest feature of the game is that it uses a microphone to pick up the sound. Enemies hiding in the dark can also “hear” the player’s hiding place. You have to consider when to make a sound to locate and when to hide quietly. If the player puts the microphone close to the nose, the tight breathing sound can make the overall gaming experience more challenging. Playing horror games with sound is simply too real.

Don't be fooled by the cuteness of the game “Ocean Rabbit”; it is actually quite punishing. As the carrot-loving rabbit Bunpu, the player jumps into the sea alone to make a pilgrimage to the dream holiday destination, “Carrot Island”. Along the way there are rocks blocking the path and delicious carrots to eat, and whether avoiding the rocks or eating the carrots, the player has to shout to make the rabbit jump. It all depends on whether your voice is loud enough.

In this meeting, combining the selectable pitch range with the crisis reactions of animals I had seen in the nature program “Animal World”, I proposed several common reactions we could choose from. As for whether to make the game 2D or 3D, we decided on 2D for the visual presentation.

In the end, after discussion, we decided that the protagonist would be a cat. Here are some sources of inspiration for my design.

Week 1: collection and feedback

After several days of research, the team members reviewed the popular voice-controlled games on the market today and summarized several feasible ways to play: for example, supporting real sound input, using a microphone to receive the voice, allowing players to use preset voice commands, and combining these with the auxiliary functions of the keyboard and mouse. Combining the collected information and reference cases, we finally decided to make a game where the louder the sound, the faster the animal runs.

“Don't Stop! Eighth Note Sauce”, a game that has become popular in Japan and on major social platforms, also uses sound as part of the core elements of the game.

Game introduction

The game is a sound-interaction game. The user controls the input volume, through the buttons or the actual decibel level, and the small animal reacts differently depending on how loud the volume is.

Small animals are timid by nature and only emerge when there is no sound to disturb them. If there is a sound, they hide immediately.

The game draws inspiration from classic games such as peekaboo and whack-a-mole. In essence, it explores the way animals protect themselves, that is, psychological defense mechanisms.

There is a hierarchy of decibels. The louder the sound, the greater the reaction of the animal. For example, a small animal will be scared to walk at level one, run away in fear at level two, run wild at level three, and faint at level five.

The player, watching from a god's-eye view behind the screen, consciously interferes with the behavior of the small animal.

Psychological defense mechanisms - the way animals protect themselves

Self-defense seems to be the instinctive reaction of every creature when its own safety is threatened. In order to adapt to the environment and avoid threats and attacks, there are many ways for animals in nature to protect themselves, such as hiding, deceiving, deterring, self-defense, counterattack, and escape.

Darwin believed that it is not the strongest of the species that survives, but the one most responsive to change. The game draws on the self-defense behavior of animals and puts it into practice.

For example, when human beings encounter painful and uncomfortable things, they may turn to entertainment, recreation, and plenty of rest to relax, temporarily escaping from urgent, high-pressure situations.

The avoidance behavior of animals is actually a kind of psychological defense mechanism at work. It is a passive defense. The essence is to use avoidance and negative methods to reduce the pain of frustration.

Lighting Week 3

Image-based lighting (IBL) settings (2)

First capture the environment map. This image can be taken by the camera in the real world (HDR is recommended for better results). It can also be rendered in real time via the camera in the game.

AOV

AOVs let us decompose the beauty pass into a separate render for each light in the scene. In addition, each of those per-light renders can be decomposed further according to the shading components (a diffuse component, a specular component, and a subsurface scattering component).

Render separately

Select Transform > SphericalTransform to insert a SphericalTransform node after the HDR image. You can use this node to convert HDR images into spherical map images. In the node's controls, set the Input Type and Output Type.

Add contact shadows and reflections