Project 4: Cloth Simulation

CS 184: Computer Graphics and Imaging, Spring 2019

Andrew Campbell, cs184-adu

Overview

In this project, I implemented a real-time cloth simulation using a mass-spring model. I began by building the data structures to represent the cloth as a deformable mesh of point masses connected by springs. Then I added force modeling and other physical constraints, combined with numerical integration over time, to simulate movement. I then added code to enforce collisions with external objects as well as self-collisions. Finally, I implemented a variety of shaders to improve the appearance of the cloth.

This project was an insightful introduction to animation. At the core of everything are real-world physics formulas, but we use a lot of approximations/hacks to get desired behavior over discrete time steps. It is satisfying to see complex behavior emerge from such a simple model. I especially enjoyed implementing GLSL shaders to take advantage of the GPU for fast and beautiful lighting calculations.

Part 1: Masses and springs

Mass-spring systems are widely used in computer graphics to model deformable objects. For this project, given a sheet of cloth of desired dimensions and parameters, we divide the cloth into an evenly spaced grid of point masses, and connect each mass with various types of springs.

In Cloth::buildGrid, we populate the row-major point_masses and springs vectors. We create a grid of masses with dimension num_width_points by num_height_points spanning width and height lengths, respectively.

We then create three types of springs to represent the structural, shear, and bending constraints between point masses:

  • Structural constraints exist between a point mass and the point mass to its left as well as the point mass above it.
  • Shearing constraints exist between a point mass and the point mass to its diagonal upper left as well as the point mass to its diagonal upper right.
  • Bending constraints exist between a point mass and the point mass two away to its left as well as the point mass two above it.
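The constraint rules above can be sketched as a spring-creation loop. The types below are simplified stand-ins (the project's real PointMass and Spring classes carry positions, rest lengths, and more); the indexing assumes the row-major layout described earlier.

```cpp
#include <cassert>
#include <vector>

// Hypothetical, simplified stand-ins for the project's types.
enum SpringType { STRUCTURAL, SHEARING, BENDING };
struct Spring { int a, b; SpringType type; };

// Sketch of the spring-creation loop in Cloth::buildGrid, assuming a
// row-major grid of point masses indexed as row * num_width + col.
std::vector<Spring> build_springs(int num_width, int num_height) {
  std::vector<Spring> springs;
  auto idx = [&](int r, int c) { return r * num_width + c; };
  for (int r = 0; r < num_height; r++) {
    for (int c = 0; c < num_width; c++) {
      // Structural: the mass to the left and the mass above.
      if (c >= 1) springs.push_back({idx(r, c - 1), idx(r, c), STRUCTURAL});
      if (r >= 1) springs.push_back({idx(r - 1, c), idx(r, c), STRUCTURAL});
      // Shearing: diagonal upper-left and diagonal upper-right.
      if (r >= 1 && c >= 1)
        springs.push_back({idx(r - 1, c - 1), idx(r, c), SHEARING});
      if (r >= 1 && c + 1 < num_width)
        springs.push_back({idx(r - 1, c + 1), idx(r, c), SHEARING});
      // Bending: two to the left and two above.
      if (c >= 2) springs.push_back({idx(r, c - 2), idx(r, c), BENDING});
      if (r >= 2) springs.push_back({idx(r - 2, c), idx(r, c), BENDING});
    }
  }
  return springs;
}
```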

Below are some views of the cloth wireframe in scene/pinned2.json, showing the structure of the point masses (which are like vertices) and springs (which are like edges).

The diagonal springs in the wireframe represent shearing constraints. In the shots below, we show a top-down view of the wireframe (1) without any shearing constraints, (2) with only shearing constraints, and (3) with all constraints.


Part 2: Simulation via numerical integration

We now numerically integrate the equations of force on the masses and springs to see how the model evolves over time. In our representation, there are two kinds of forces: external forces (e.g. gravity) which uniformly affect the cloth, and spring correction forces which apply the spring constraints.

Computing total forces on each point mass

We first compute a total external force by looping over the given acceleration vectors in external_accelerations and accumulating a net force given by $\vec{F} = m\vec{a}$ for each acceleration $\vec{a}$; this force is uniformly applied to all point masses.

We then apply spring correction forces. For each spring, we use Hooke’s law to compute the magnitude of the force applied to the two masses on the spring’s ends:

$$|F_s| = k_s \, (\|p_a - p_b\| - \ell)$$

where $k_s$ is the spring constant, $p_a$ and $p_b$ are the positions of the two masses, and $\ell$ is the spring’s rest length. Note that we scale $k_s$ by 0.2 for bending constraints to keep them weaker than structural or shearing constraints.

The force vector points from one point mass to the other with magnitude equal to $|F_s|$; we apply this force to one point mass, and the opposite sign force to the other point mass.
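The per-spring force computation can be sketched as follows, using a minimal 3-vector helper (the project uses its own Vector3D type); any bending-spring scaling of $k_s$ is assumed to happen in the caller.

```cpp
#include <cassert>
#include <cmath>

// Tiny 3-vector helper (hypothetical; the project's Vector3D differs).
struct Vec3 {
  double x, y, z;
  Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
  Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
  double norm() const { return std::sqrt(x * x + y * y + z * z); }
};

// Hooke's-law force on mass a from the spring (a, b); mass b receives
// the negation of this vector.
Vec3 spring_force_on_a(const Vec3 &pa, const Vec3 &pb,
                       double ks, double rest_length) {
  Vec3 dir = pb - pa;                           // points from a toward b
  double len = dir.norm();
  double magnitude = ks * (len - rest_length);  // Hooke's law
  return dir * (magnitude / len);               // pulls a toward b when stretched
}
```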

Verlet integration

Given the net force acting on each point mass, we need to perform numerical integration to compute each point mass’s change in position. Here, we use Verlet integration, which is fairly accurate and easy to implement.

Given a particle position $x_t$ at time $t$, Verlet integration computes the point mass’s new position at time $t + dt$ as follows:

$$x_{t+dt} = x_t + v_t \, dt + a_t \, dt^2$$

where $v_t$ is the current velocity and $a_t$ is the current total acceleration. We approximate $v_t \, dt$ as $x_t - x_{t-dt}$. We also introduce a damping term into the simulation to represent energy loss due to friction; the position update equation is thus

$$x_{t+dt} = x_t + (1 - d)(x_t - x_{t-dt}) + a_t \, dt^2$$

where the damping factor $d$ (usually small) is between 0 and 1.
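A single damped Verlet step following this update rule can be sketched in one dimension for clarity (the actual code updates Vector3D positions per point mass):

```cpp
#include <cassert>
#include <cmath>

// Damped Verlet position update:
// x_{t+dt} = x_t + (1 - d) * (x_t - x_{t-dt}) + a_t * dt^2
double verlet_step(double x_t, double x_prev,
                   double accel, double dt, double damping) {
  return x_t + (1.0 - damping) * (x_t - x_prev) + accel * dt * dt;
}
```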

Constraining Updates

To prevent springs from becoming unreasonably elongated, we add an additional constraint that a spring’s length is at most 10% greater than its rest length at the end of any time step. If a spring is longer than this, we modify the two point mass positions along their current direction vector so that their distance apart satisfies the constraint. Half of the correction is applied to each mass, unless one or the other is pinned, in which case the unpinned mass absorbs the entire correction.
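A 1D sketch of this correction (real code works on Vector3D positions along the spring's direction vector) might look like:

```cpp
#include <cassert>
#include <cmath>

// Enforce the "at most 110% of rest length" constraint on a spring whose
// endpoints sit at 1D positions a and b. Half the correction goes to each
// mass, unless one of them is pinned.
void constrain(double &a, double &b, double rest_length,
               bool a_pinned, bool b_pinned) {
  double len = std::fabs(b - a);
  double max_len = 1.10 * rest_length;
  if (len <= max_len || (a_pinned && b_pinned)) return;
  double excess = len - max_len;
  double dir = (b > a) ? 1.0 : -1.0;        // direction from a toward b
  if (a_pinned)      b -= dir * excess;     // b absorbs the whole correction
  else if (b_pinned) a += dir * excess;     // a absorbs the whole correction
  else { a += dir * excess * 0.5; b -= dir * excess * 0.5; }
}
```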


The appearance and behavior of the cloth can be changed greatly by modifying various parameters. Shown below are final resting views of scene/pinned2.json with different values for the spring constant ks; from left-to-right we have a ks of 100, 1000, 5000 (default), and 100000. The units are N/m.

The spring constant represents the stiffness, or resistance to deformation, of the spring. A low ks thus produces a loose cloth while a high ks produces a tight cloth less prone to stretching under its own weight.

Shown below are final resting views of scene/pinned2.json with different values for density; from left-to-right we have a density of 5, 15 (default), 30, and 50. The units are g/cm².

Increasing the density of the point masses increases the downward gravitational force on the cloth. The cloth thus sags more, which is especially evident in the folds.

Shown below are animated views of scene/pinned2.json as the cloth falls into its resting state with different damping values. From left-to-right we have a damping of 0.0%, 0.2% (default), 0.4%, and 0.8%.

Unlike ks and density, the damping parameter primarily affects the animation of the falling cloth rather than the final appearance. The damping scale controls the oscillation of the spring forces; the higher the value, the less likely the cloth is to continue moving. A low damping value produces more ripples and greater oscillation time before rest because energy is dissipated slowly.

Below is a screenshot of scene/pinned4.json in its final resting state.


Part 3: Handling collisions with other objects

The simulation is made more interesting by implementing intersections with external objects. To handle spheres, we implement Sphere::collide, which takes in a point mass and adjusts its position if it intersects with or is inside the sphere; if it is, we “bump” it up to the surface. More specifically, we compute where the point mass would land on the sphere’s surface by extending the ray from the sphere’s origin through the point mass, form a correction vector from the mass’s last position to that surface point, and apply the correction to the last position after scaling it down by $(1 - f)$, where $f$ is the friction coefficient.
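A sketch of this bump-to-surface logic, under the interpretation above (with a hypothetical minimal Vec3 in place of the project's Vector3D):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
  double x, y, z;
  Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
  Vec3 operator+(const Vec3 &o) const { return {x + o.x, y + o.y, z + o.z}; }
  Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
  double norm() const { return std::sqrt(x * x + y * y + z * z); }
};

// If the point mass is inside (or on) the sphere, bump it to the surface:
// find the surface point along the origin->position ray, then move the
// last position by the correction scaled down by (1 - friction).
Vec3 sphere_collide(Vec3 position, Vec3 last_position,
                    Vec3 origin, double radius, double friction) {
  Vec3 d = position - origin;
  double dist = d.norm();
  if (dist > radius) return position;           // no intersection
  Vec3 surface = origin + d * (radius / dist);  // point on the sphere surface
  Vec3 correction = surface - last_position;
  return last_position + correction * (1.0 - friction);
}
```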

To handle planes, we implement Plane::collide which checks if the point mass moves from one side of the plane to the other in the last time step, and if so, “bumps” it back up to the side of the surface it originated from.

Finally, we update Cloth::simulate so that for every PointMass, we try to collide it with every possible CollisionObject.

Below are views of scene/sphere.json in its final resting state for ks=500 (left), ks=5000 (center), and ks=50000 (right).

As we saw earlier, the spring constant controls the stretchiness of the cloth. With a low ks, the cloth smoothly drapes around the sphere. With a larger ks, the cloth maintains more of its shape as the folds are more rigid.

Below is the final resting state of scene/plane.json, illustrating cloth-plane intersection:


Part 4: Handling self-collisions

We now implement self-collision code to ensure the cloth does not clip when it folds on itself. To keep the simulation real-time, we use a spatial hashing scheme so as to only consider nearby masses in computing potential interactions. Specifically, at each time step, we build a hash table that maps a float to a vector<PointMass *>. The float uniquely represents a 3D box volume in the scene and the vector<PointMass *> contains all of the point masses that are in that 3D box volume.

My hash function works by partitioning the 3D space into boxes with dimensions $w \times h \times t$, where $w$ = 3 * width / num_width_points, $h$ = 3 * height / num_height_points, and $t = \max(w, h)$. We take the mass position, truncate its coordinates to those of the containing 3D box, and combine the three box coordinates into a single float using powers of a large prime number.
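One way to sketch such a hash (the exact coordinate-combining formula here is a hypothetical choice; any mapping that gives distinct boxes distinct values works) is:

```cpp
#include <cassert>
#include <cmath>

// Spatial hash sketch: truncate the position to its containing box's
// integer coordinates, then combine them with powers of a prime p.
float hash_position(double px, double py, double pz,
                    double w, double h, double t) {
  double bx = std::floor(px / w);
  double by = std::floor(py / h);
  double bz = std::floor(pz / t);
  const double p = 31.0;  // prime used to combine the box coordinates
  return (float)(bx * p * p + by * p + bz);
}
```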

We implement self-collision by using the hash map to look up potential candidates for collision; computing pairwise correction vectors for those pairs within some threshold distance; and taking the average of all pairwise correction vectors, scaled down by simulation_steps, to get the final correction vector.
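The averaging step can be sketched in 1D for clarity (the real code works with Vector3D positions, and the threshold here is assumed to be twice the cloth thickness):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// For each nearby candidate within 2 * thickness of the point mass,
// accumulate the push that would separate the pair to exactly
// 2 * thickness; average the pushes and divide by simulation_steps.
double self_collision_correction(double pm,
                                 const std::vector<double> &candidates,
                                 double thickness, double simulation_steps) {
  double total = 0.0;
  int count = 0;
  for (double other : candidates) {
    double dist = std::fabs(pm - other);
    if (dist == 0.0 || dist >= 2.0 * thickness) continue;  // skip self / far pairs
    double dir = (pm > other) ? 1.0 : -1.0;
    total += dir * (2.0 * thickness - dist);  // push pm away from other
    count++;
  }
  if (count == 0) return 0.0;
  return (total / count) / simulation_steps;
}
```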

I had some trouble with this part getting a good hash function; the one I settled on achieved decent performance. I also spent some time debugging before realizing I needed to call build_spatial_map() at every timestep.

Below are sequential screenshots of scene/selfCollision.json as the cloth falls and folds onto itself using the default parameters.

We now explore how different parameters affect the behavior of the falling cloth. Shown below are screenshots recording the fall using ks=500.

Shown below are screenshots recording the fall using ks=50000.

As we increase the spring constant, the cloth clearly has fewer ripples as it collapses on itself. The stronger internal spring forces keep the cloth more in shape, resulting in smoother, wider, and fewer folds in the resting state.

Next, we show variations in the density. Shown below are screenshots recording the fall using density=5.

Shown below are screenshots recording the fall using density=30.

There is not a major difference, but with a smaller density, the cloth tends to spread out more as it hits the surface of the plane. With a larger density, the folds are more compressed in the resting state.


Part 5: Shaders

We now implement a few basic GLSL shader programs to give the cloth a much richer appearance. Shaders are critical components of the graphics pipeline; they are stand-alone programs that run in parallel on the GPU, outputting a single four-dimensional vector representing the color at a particular input point.

Our shaders are written in GLSL, a C-like language, and have two parts:

  • Vertex Shaders apply transforms to vertices, modifying properties like their position and normal vectors. All vertex values are interpolated via barycentric coordinates across the face of the polygon. They are stored in .vert files.
  • Fragment Shaders take in interpolated geometric attributes of the fragment as computed in the vertex shader to compute a final color value. They are stored in .frag files.

Blinn-Phong Shading

We will begin with using the default vertex shader, which takes in as input the model-space attributes in_position and in_normal of type vec4, in addition to the uniforms u_model and u_view_projection, which are the matrices used to transform a point from model space into world space, and from world space to view space to screen space, respectively. It outputs two values for use in the fragment shader: v_position and v_normal.

We implement the Blinn-Phong shading model in the fragment shader according to the equation:

$$L = k_a I_a + k_d \frac{I}{r^2} \max(0, n \cdot l) + k_s \frac{I}{r^2} \max(0, n \cdot h)^p$$

where $r$ is the distance from the vertex to the light source, $l$ is the unit vector from the vertex to the light source, $n$ is the normal, $h$ is the halfway vector between the viewer and light-source vectors, and $I$ is the intensity of the light source. The other parameters ($k_a$, $I_a$, $k_d$, $k_s$, and the specular exponent $p$) are constants chosen according to taste.

The Blinn-Phong model may appear complicated, but the components are fairly intuitive. The equation is simply the sum of an ambient light component, a diffuse component, and a specular reflection component. The ambient component provides a constant brightness; the diffuse component considers the directional impact of a light source scaled by distance; and the specular component simulates the preferential direction of the bright spots of light that appear on shiny materials.
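The per-channel arithmetic of these three components can be sketched in scalar form (C++ here rather than GLSL; the dot products with the light and halfway vectors are taken as precomputed inputs):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

// Scalar Blinn-Phong evaluation for one color channel. n_dot_l and
// n_dot_h are the cosines with the light and halfway vectors; r is the
// distance to the light source.
double blinn_phong(double ka, double Ia, double kd, double ks_coef,
                   double p, double I, double r,
                   double n_dot_l, double n_dot_h) {
  double falloff = I / (r * r);  // intensity falls off with squared distance
  double ambient  = ka * Ia;
  double diffuse  = kd * falloff * std::max(0.0, n_dot_l);
  double specular = ks_coef * falloff * std::pow(std::max(0.0, n_dot_h), p);
  return ambient + diffuse + specular;
}
```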

I settled on values for the constants $k_a$, $I_a$, $k_d$, $k_s$, and $p$ mostly by experimentation and personal preference, and just used u_color for the surface color.

The individual components of the Blinn-Phong model are shown below. From left-to-right, we have the ambient component, the diffuse component, and the specular component.

Combining the three components gives the entire Blinn-Phong model:

Texture Mapping

The vertex shader provides a v_uv coordinate for use in the fragment shader. We can perform texture mapping by sampling from the u_texture_1 uniform using the built-in function texture(sampler2D tex, vec2 uv). We simply return this sample as the color value.

Below is a view of scene/pinned2.json using an image of a SpaceX Falcon launch as texture.

Displacement and Bump Mapping

Instead of using texture to determine the color of the mesh, we now use it to encode a height map. In bump mapping, we modify the normal vectors of an object so that the fragment shader gives the illusion of detail (such as bumps) on an object.

The idea is to compute the local-space normal by looking at how the height changes as we make small changes in $u$ or $v$. We compute

$$dU = \left(h(u + 1/w, v) - h(u, v)\right) k_h \, k_n$$
$$dV = \left(h(u, v + 1/h_t) - h(u, v)\right) k_h \, k_n$$

where $h(u, v)$ is a function that returns the height encoded by the texture map at coordinates $(u, v)$, $w$ and $h_t$ are the texture’s width and height, and $k_h$ and $k_n$ are scaling factors controlled in the GUI. We get the new local normal as $n_o = (-dU, -dV, 1)$ and convert it to model space with the tangent-bitangent-normal (TBN) matrix.

We use a simple $h(u, v)$ that returns the r component of the color vector stored in the texture at coordinates $(u, v)$.
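The finite-difference normal computation can be sketched as follows (C++ rather than GLSL; `height` stands in for sampling the texture's r channel, and the result is the normalized local-space normal before the TBN transform):

```cpp
#include <cassert>
#include <cmath>

struct Normal { double x, y, z; };

// Local-space bump-mapped normal from finite differences of the height
// map. tex_w/tex_h are the texture dimensions; k_h and k_n are the GUI
// scaling factors.
template <typename HeightFn>
Normal bump_normal(HeightFn height, double u, double v,
                   double tex_w, double tex_h, double k_h, double k_n) {
  double dU = (height(u + 1.0 / tex_w, v) - height(u, v)) * k_h * k_n;
  double dV = (height(u, v + 1.0 / tex_h) - height(u, v)) * k_h * k_n;
  double len = std::sqrt(dU * dU + dV * dV + 1.0);
  return {-dU / len, -dV / len, 1.0 / len};  // normalized (-dU, -dV, 1)
}
```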

In displacement mapping, we modify the position of vertices to reflect the height map, in addition to modifying the normals to be consistent with the new geometry. In the vertex shader, we displace each vertex position in the direction of the original model-space vertex normal:

$$p' = p + n \cdot h(u, v) \cdot k_h$$

Shown below are bump mapping on the cloth and sphere using a brick texture.

Shown below is displacement mapping on the sphere using the same texture.

Bump mapping results in the visual application of texture to the cloth surface, but doesn’t actually change the geometry. Displacement mapping is an extension of bump mapping that does change the cloth mesh geometry in addition to the surface appearance, altering the vertex positions according to the grooves of the texture.

Below we compare the two shaders for different sphere coarseness values. The left shows bump mapping and the right shows displacement mapping for a sphere of resolution 16x16.

Below, the left shows bump mapping and the right shows displacement mapping for a sphere of resolution 128x128.

Using a low resolution mesh for the sphere results in a blockier, more polygon-like shape for both shaders. Displacement mapping is largely ineffective as the roughness of the surface is dominated by sharp edges due to the limited resolution. With a higher resolution sphere, there are sufficiently many vertices to be displaced to reflect the texture, as intended.

Environment-mapped Reflections

Using a given cubemap texture, we can simulate a mirror-like material. In shaders/Mirror.frag, we compute the outgoing eye ray $w_o$ from the camera’s position and the fragment’s position. We then reflect $w_o$ across the provided surface normal to get $w_i$ and return the sampled environment cubemap at $w_i$.
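The reflection direction follows the standard formula $w_i = 2 (w_o \cdot n)\, n - w_o$, sketched here in C++ (GLSL's built-in reflect uses an incident-ray convention with the opposite sign):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

// Reflect the outgoing eye ray wo across the unit surface normal n to get
// the cubemap lookup direction: wi = 2 (wo . n) n - wo.
V3 reflect_across(V3 wo, V3 n) {
  double d = wo.x * n.x + wo.y * n.y + wo.z * n.z;
  return {2 * d * n.x - wo.x, 2 * d * n.y - wo.y, 2 * d * n.z - wo.z};
}
```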

Below is a screenshot of the sphere and cloth draped over the sphere using the mirror texture on the default cube map.