
Communications of the ACM

Research highlights

Where Do People Draw Lines?



This paper presents the results of a study in which artists made line drawings intended to convey specific 3D shapes. The study was designed so that drawings could be registered with rendered images of 3D models, supporting an analysis of how well the locations of the artists' lines correlate with other artists', with current computer graphics (CG) line definitions, and with the underlying differential properties of the 3D surface. Lines drawn by artists in this study largely overlapped one another, particularly along the occluding contours of the object. Most lines that do not overlap contours overlap large gradients of the image intensity and correlate strongly with predictions made by recent line-drawing algorithms in CG. A few were not well described by any of the local properties considered in this study. The result of our work is a publicly available data set of aligned drawings, an analysis of where lines appear in that data set based on local properties of 3D models, and algorithms to predict where artists will draw lines for new scenes.


1. Introduction

The goal of our work is to characterize the mathematical properties of line drawings made by human artists. Specifically, we aim to relate the locations of lines drawn by artists to properties of the surface geometry, lighting, and viewing conditions at those locations. This type of analysis can both guide the future development of line-drawing algorithms in computer graphics (CG) and provide artists and observers with a precise vocabulary for characterizing and discussing where lines on a model are drawn. This paper describes a study in which art students were asked to make line drawings (Figure 1) that "convey the shape" of 3D models shown to them as rendered images. The study balances the competing concerns of allowing the artists to draw freely and of acquiring useful data. The artists were asked to make two drawings on paper: first a freehand drawing, then a tracing to register their drawing to a faint, photorealistic image of the model. The registered drawings may be used to characterize the differences between specific artists, to study correlations between human-drawn and computer-generated lines, and to provide training data for synthesis of new line drawings.

We provide a statistical analysis relating the locations where artists drew lines to the geometric, viewpoint, and lighting characteristics of the underlying 3D scene. The analysis supports several conclusions. First, human line drawings, made under our controlled conditions, are quite consistent with one another. Second, most of the areas where artists consistently drew lines can be described by well-known, simple mathematical properties, such as the locations of occluding contours and large gradients of image intensity. Third, current line-drawing algorithms can help explain many of the lines that do not lie in those areas but cannot explain all the artists' lines. We believe that this paper in no way exhausts the possible investigations that can be performed with this data. We therefore make our drawings and models freely available, in the hope that other researchers will continue this line of inquiry.

* 1.1. Background and related work

Principles of drawing. Artists have for centuries studied the principles of how to make drawings. Books codifying these principles line the shelves of any major bookstore.9,16,21 Although these texts tend to emphasize the more artistically salient concerns of composition, motion, passion, and mystery, some also offer advice on using lines to convey texture, bulk, and shading, noting that even sparse line drawings are sufficient for the viewer to identify shape. Some explicitly identify known line types as candidates for drawing (e.g., contours and ridges23 or specific feature lines on a known shape such as the nose18). Nevertheless, little more is said about where on a figure to place lines in order to best convey shape; this decision-making process seems to be learned through trial and error over years of practice by individual artists.

Algorithmic line drawing. Inspired by the effectiveness and aesthetic appeal of human line drawings, scientists have investigated algorithms for generating line drawings. The mathematician Felix Klein reportedly11 asked his students to draw parabolic lines (lines of zero Gaussian curvature) over a bust of Apollo, believing they would expose some aspects of the aesthetics of the sculpture. More recently, researchers have explored smooth silhouettes,10 suggestive contours and highlights,7,8 geometric ridges and valleys,17 and apparent ridges13 (Figure 2). Each algorithm has strengths and weaknesses. Silhouette lines (more precisely, occluding contours) are obviously important, but alone they tend to leave a drawing too sparse. Suggestive contours connect to occluding contours and often lie in naturally important areas but do not exist at all on convex shapes. Geometric ridges help define convex shapes but often seem to exaggerate curvature. Apparent ridges use a more sophisticated view-dependent curvature metric but still seem to exaggerate curvature in some cases and tend to be noisy. Finally, while not explicitly designed as a line-drawing method, intensity edge detection in shaded images by algorithms like that of Canny2 is so straightforward that it is common in image editing software like Photoshop. These lines are surprisingly effective for many applications but require tuning intensity thresholds and are brittle in the face of image noise.

With the myriad line-drawing options now available, it is natural to ask which are appropriate for a specific situation, or which more closely resembles what a human would draw. Recent efforts8,13,14 include direct comparisons between their results and artists' renderings. However, these comparisons are informal and generally intended to illustrate the inspiration for the work, not to evaluate the results of the algorithm. In contrast, the data set presented here makes possible formal comparisons by directly associating 3D models, lighting, and camera angles with human drawings.

Hand-drawn data sets. Hand-drawn or hand-annotated data sets can be useful wherever visual comparison between an algorithm and human performance is desired. In computer vision, human annotations have been used as reference data for image segmentation,15 object recognition,22 and 3D model segmentation.3 Segmentation and line drawing are related but distinct: segmentation boundaries are an obvious source of lines but generally do not provide compelling line drawings on their own.

Two recent studies directly investigated drawings by human artists of 3D models. Isenberg et al.12 compared viewers' perceptions of hand-drawn versus computer-generated pen-and-ink illustrations. Phillips et al.19 conducted a study similar to ours, in which artists were asked to draw synthetic, blobby shapes from a range of prompt types. Among other differences from that work, our study includes a separate tracing and registration step that allows greater accuracy in the analysis of artists' lines.


2. Study Design

The study is designed to capture the relationships between the locations where human artists draw lines and the mathematical properties of the model's surface and appearance at those locations. To achieve this goal in a way that supports detailed analysis, several important choices must be made: what drawing style to consider; what models, views, and lighting conditions to use as prompts; how to present these prompts to the artists; what instructions to give the artists; and how to scan and process the drawings. The following sections describe each of our design decisions in detail.

* 2.1. Artistic style

The first challenge in designing the study is to decide on a style of drawing that is narrow enough that all artists have roughly similar intentions while drawing, yet flexible enough for each artist to exercise individual ingenuity. We balance these goals by focusing on line drawing that includes only feature lines, with no hatching or shading (examples appear in Figure 5). This choice of style was made for two reasons. First, it is a simple style that is familiar to most artists and yet expressive enough to depict shape. Second, it matches the style generated by several nonphotorealistic rendering (NPR) algorithms recently proposed in the CG literature.7,13 By asking the artists to draw in the same style as the computer algorithms, we can learn both about the human drawings (by using the vocabulary of the algorithms) and about the computer drawings (by using statistical correlations with human tendencies).

We give each artist verbal and written instructions to make drawings with "lines that convey the shape" of an object. We do not provide instructions about whether lines should represent shape features, lighting features, or anything else. However, we specifically ask the artists to refrain from including lines that represent area shading or tone features, such as stippling or hatching.

* 2.2. Prompt selection

A second design decision is to select 3D models and rendering parameters to use when producing prompts (images depicting a shape for the artists to draw).

Our first concern is to provide images from which the artists can easily infer shape but that are not so familiar that they apply domain-specific ("idiomatic") knowledge when drawing. This consideration not only rules out overly abstract or complicated 3D surfaces (i.e., shapes unlike anything in common experience) but also rules out objects with strong semantic features (e.g., human faces) and ones commonly drawn in art classes (e.g., fruit). It also suggests that multiple views of the shape be provided as prompts so that ambiguities in one view are resolved by another. Finally, prompt images should be photorealistic to avoid confusing artists who are not familiar with classic CG rendering artifacts such as hard shadows and lack of indirect illumination.

For productive analysis, the set of prompts should include pixels with a wide variety of mathematical properties (e.g., high image gradients, surface critical points, etc.), and these features should be separated spatially to allow clear distinctions to be made. This consideration rules out objects containing only large, planar facets (few interesting surface features), convex objects (no concave surface features), and other surfaces with few inflections. Rather, it suggests blobby objects with many curved surfaces.

Finally, the objects must be relatively simple, without much fine scale detail. Otherwise, the artists may be tempted to abstract or simply omit important features.

Based on these criteria, we select 12 models of 4 object types for our study: (a) four bones, (b) two tablecloths, (c) four mechanical parts, and (d) two synthetic shapes (Figure 3). We synthesize four prompt images for each model, one for each combination of two different viewpoints and two lighting conditions. The two viewpoints are always 30° apart (so that large parts of each model can be seen from both viewpoints) and are carefully chosen to distribute surface features across the image. By providing prompts with different lighting and different viewpoints for the same model, we can analyze image-space properties in isolation from object-space ones.

We generate our images using YafRay,25 a free ray-tracing package capable of global illumination using Monte Carlo path tracing. The models are rendered using a fully diffuse, gray material, and thus take on the color of the lighting environment. For lighting, we use the Eucalyptus Grove and Grace Cathedral high-dynamic-range environment maps captured by Debevec.6

* 2.3. Line drawing registration

The final and most difficult part of the study design is to engineer a system that is able to register line drawings made by artists to pixels of a prompt image with great accuracy.

Designing such a system is challenging because there is a trade-off between allowing the artist to draw in a natural manner (e.g., with pencil on a blank sheet of paper) versus including constraints that facilitate accurate registration between prompts and line drawings. On one hand, the drawing process surely must not bias the locations of lines made by the artist, and thus it is not a good idea to have the artist compose a drawing directly over the image prompt. On the other hand, the process must provide enough registration accuracy to distinguish between important mathematical properties at nearby pixels in the prompt. This problem is particularly difficult since freehand drawings can be geometrically imprecise, and the intended location of every line is only known by the artist.

Our design balances these trade-offs with a simple two-step process. The artist is given two sheets of 8.5" × 11" paper for each line drawing (Figure 4). The prompt page (shown on the left) contains multiple full color views of the prompt shape, one of which is large (6.5" × 4.75") and is called the main view. The drawing page (shown on the right) contains two boxes, each the same size as the main view. The top box is initially blank, while the bottom box contains a faint version of the main view.

The artist is asked to complete the drawing page by first folding the page vertically in half so that only the blank space at the top is visible (Figure 4, left). Using the prompt page for reference, the artist draws the prompt shape in the blank space, just as if making a normal sketch. When finished, the artist unfolds the drawing page and copies the freehand drawing onto the faint image on the bottom of the same page. During the copying step, the artist is asked to change the shape of their lines to match the target rendering but not to change the number or relative position of the lines. In effect, the artist is asked to perform a nonlinear warp of their original drawing onto the target shape. A typical result is shown on the right side of Figure 4.

We scan the drawing page with a flatbed scanner, locate fiducials included in the corners of the page, and then use the fiducials to register the traced lines with the 3D model rendered from the main viewpoint. An adaptive thresholding method is used to convert the scanned gray-scale image into a binary image so that all the artist's lines, regardless of strength, are included in the binary image. We then use a thinning operator to narrow the lines in the binary image down to the width of one pixel. The final result is a 1024 × 768 pixel binary image containing a single-pixel-wide approximation of the human artist's lines. While this procedure takes up to twice as long as a single drawing (i.e., it requires the artist to draw every line twice), it achieves a nice balance between the design trade-offs: the line drawings are composed in a freehand manner familiar to artists, while the intended location of every line on the 3D surface can be inferred with great accuracy.
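For concreteness, a minimal Python sketch of this scan-processing step is shown below, assuming scikit-image for the adaptive threshold and skeletonization; the function name, operator choices, and parameter values are illustrative, not the study's actual implementation.

    from skimage import filters, io, morphology

    def extract_line_pixels(scan_path, block_size=51, offset=0.02):
        """Reduce a scanned gray-scale drawing to a one-pixel-wide binary image.

        A sketch of the processing described above: adaptive thresholding keeps
        faint strokes regardless of pen pressure, and skeletonization thins each
        stroke to a single-pixel centerline. Parameter values are illustrative.
        """
        gray = io.imread(scan_path, as_gray=True)
        # A pixel is "ink" if it is darker than its local neighborhood.
        local_threshold = filters.threshold_local(gray, block_size, offset=offset)
        ink = gray < local_threshold
        # Thin strokes down to one-pixel-wide centerlines.
        return morphology.skeletonize(ink)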

* 2.4. Data collection

This line drawing and registration procedure was repeated for 29 artists, most of whom were enrolled in one of four art classes (two composed of middle and high school students, one of adult evening students, and another of college students). Two of the participants were professional artists. Each artist completed up to 12 prompts. Every participant completed a questionnaire listing his/her gender, age, and number of years of art training. In all, there were 22 females and 7 males. The ages ranged from 10 to 54 years, with an average of 22, and the participants reported an average of 6 years of art training (this number should be taken with a grain of salt, as some participants reported only training at the college level, while others reported all art classes).

Every artist was provided a folder with 1 page of instructions, 12 prompt pages, and 12 corresponding drawing pages (one for each model). The folders were arranged such that no artist could draw the same model more than once, and prompts for models, viewpoints, and lighting conditions were arranged in shuffled order to reduce effects of training on our analysis.

The artists were given brief verbal instructions ("draw lines that convey shape" and "be sure to copy every line from your freehand drawing over the faded image below") and then told to complete line drawings at their own pace for as long as they had time. Most of the art classes were scheduled for a 2-hour block, and each line drawing took 10–15 min on average (with time split roughly two-thirds for drawing freehand and one-third for tracing lines over the faded image). Each participant completed an average of 7.5 drawings; only one participant (a professional artist) completed all 12 drawings in his folder.

In all, 208 line-drawing images were collected. Generally speaking, the artists followed the directions well, produced line drawings that convey shape effectively, and were careful when tracing lines over the faded image (some example line drawings are shown in Figure 5). However, in some cases, the artists clearly were not careful in the registration step, failing to follow even the exterior outline of the shape. Since accurate registration of lines to image features is essential for meaningful results, we cull these tracings from our analysis. To do this in an unbiased way, we assume that inclusion of the exterior outline is common to all human line drawings and eliminate from our data set any drawings where less than 90% of the exterior is within 1 mm of a human-drawn line. The remaining 170 line drawings form the basis for our analysis.


3. Results

We can investigate a number of questions by comparing the captured line drawings and the synthetic images provided as prompts to the artists. We ask not only how artists' drawings overlap with one another but also how they overlap with lines generated by CG algorithms and how they can be predicted from local properties of the underlying surface and rendered image.

* 3.1. How similar are the artists' drawings?

The first and most basic analysis we perform is to measure the similarity between artists' drawings of the same prompts. We can show consistency between artists qualitatively by superposing drawings on top of each other and visualizing how much they overlap. For example, Figure 6a shows each artist's drawing in a separate color. In this example, the artists agree very closely with each other in most areas, especially along obvious features such as boundaries and occluding contours, but differ in exactly where they place lines in the right part of the rockerarm.

In order to quantify consistency, we compute a histogram of pairwise distances between artists' drawings (Figure 6b). For every pixel in every drawing, we record the distance to the closest pixel in every other drawing of the same prompt and then observe how often these distances lie within the tolerance of the tracing procedure (1 mm). Across all prompts, approximately 75% of human drawing pixels are within 1 mm of a drawn pixel in all other drawings for that prompt.
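These closest-pixel queries can be batched with a Euclidean distance transform. Below is a minimal sketch in Python (SciPy assumed; both function names are hypothetical). The same fraction_near test, applied to the model's exterior outline, also implements the 90% registration cull of Section 2.4.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def fraction_near(source, target, tol_pixels):
        """Fraction of drawn pixels in boolean image `source` lying within
        tol_pixels of a drawn pixel in boolean image `target`."""
        return (distance_transform_edt(~target)[source] <= tol_pixels).mean()

    def pairwise_distances(drawings):
        """Closest-pixel distances between all ordered pairs of drawings of
        one prompt: for every pixel of every drawing, its distance to the
        nearest pixel of each other drawing. Histogram the result to
        reproduce a plot in the style of Figure 6b."""
        dists = [distance_transform_edt(~d) for d in drawings]
        return np.concatenate([dists[j][drawings[i]]
                               for i in range(len(drawings))
                               for j in range(len(drawings)) if i != j])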

* 3.2. Do known CG lines describe artists' lines?

A natural question to ask is how well currently known line-drawing algorithms can describe the human artists' lines. In our analysis, we consider the following line-drawing algorithms: image intensity edges,2 geometric ridges and valleys (as defined by Ohtake et al.17), suggestive contours,7 and apparent ridges.13 For the object space methods (ridges and valleys, suggestive contours, and apparent ridges), we always include the exterior boundary and interior occluding contours in the generated drawing. For Canny edge detection, we always include the exterior but not the interior contours, since they are not necessarily image intensity edges.

Quantifying comparisons between drawings. In order to compare an artist's drawing and a computer-generated drawing quantitatively, we use the standard information retrieval statistics of precision and recall (PR). Here, precision is defined as the fraction of pixels in the CG drawing that are near any pixel of the human drawing. Recall is defined as the fraction of pixels in the human drawing that are near any line of the CG drawing. We define "near" by choosing a distance threshold—we use 1 mm.

As an example, consider comparing the set of five human drawings shown in Figure 6a with the lines generated by the apparent ridges algorithm (Figure 7). The output of the apparent ridges algorithm is not only a set of lines but also a "strength" value at each line point. In general, we expect stronger lines to be more important and thus more likely to match the artists' lines. We thus generate a series of binary apparent ridges images, each consisting of all points with strength above a given threshold. The PR of each drawing compared with this set of images is shown as a dotted pink line in Figure 7. As the strength threshold is lowered, more lines are produced, typically causing recall to increase and precision to decrease, yielding a sloping line in the PR graph. For completeness, we allow the PR plot to extend to P = 1.0, R = 0.0 (defined as a blank image) and directly downward to P = 0.0 from the highest recall obtained by the algorithm. Since each PR curve is defined over P ∈ [0, 1], we can compute an average curve by combining points along lines of fixed precision. The PR values for occluding contours alone are plotted as black dots and are not averaged.
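Under these definitions, precision and recall likewise reduce to distance-transform lookups. The sketch below assumes a "strength" image that is zero off the algorithm's lines and positive on them; that representation, and the function names, are assumptions for illustration.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def precision_recall(cg, human, tol_pixels):
        """Precision: fraction of CG line pixels within tol_pixels of any
        human pixel. Recall: fraction of human pixels within tol_pixels of
        any CG pixel. `cg` and `human` are boolean line images."""
        precision = (distance_transform_edt(~human)[cg] <= tol_pixels).mean()
        recall = (distance_transform_edt(~cg)[human] <= tol_pixels).mean()
        return precision, recall

    def pr_curve(strength, human, tol_pixels, n_levels=50):
        """Sweep the line-strength threshold to trace a PR curve, as done
        for apparent ridges in Figure 7. `strength` is assumed to be zero
        off the algorithm's lines and positive on them."""
        levels = np.linspace(strength[strength > 0].min(), strength.max(),
                             n_levels)
        return [precision_recall(strength >= t, human, tol_pixels)
                for t in levels]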

Comparing CG lines in combination. In order to combine line definitions fairly, we use computer-generated drawings with a fixed 80% precision. We then classify each pixel in each human drawing by the nearby CG lines. Pixels that lie near a single line definition are considered to be explained only by that definition, while pixels that lie near multiple definitions are considered explained by all the nearby definitions.

To visualize the results, we create bar charts that partition the lines into object space definitions (blue), image intensity edges (green), or both (brown). Looking at the results in Figure 8a, we find that the large majority of lines are described by both image intensity edges (Canny edges) and an object space definition. Of the remainder, slightly more lines are explained by the combined object space approaches than by image edges alone.

Lines that are explained only by image edges account for at most 5% of all classified lines at 80% precision. We can also break down the human lines by intuitive categories, such as exterior and interior occluding contours and everything else (Figure 8b). Across all model groups, exterior contours alone account for between 35% and 50% of all classified pixels. Interior occluding contours account for between 10% and 20% of all classified pixels, while all other definitions make up 20%–35%.

Can CG lines characterize artists' tendencies? Given a way of describing an artist's drawing in terms of CG line types, it is possible to investigate whether those descriptions can characterize the similarities and differences between artists' styles or tendencies. For example, it may be possible to characterize whether certain artists tend to draw certain geometric features (e.g., ridges) more than other artists do. In such cases, the CG line definitions provide a vocabulary to discuss features of human line drawings.

Figure 9 shows a simple example of this type. Two drawings of the same prompt (twoboxcloth with Grace Cathedral lighting) are compared by the composition of CG line types. The colored bars indicate the fraction of the drawing made up by each line type. In this case, however, each set of bars represents a single drawing. One immediate difference between the drawings is that artist A drew more lines besides the contours. Non-contour lines account for 26% of artist A's drawing and only 13% of artist B's drawing. The bulk of the difference between the artists is in the use of ridge-like lines (green, yellow, and pink bars). Artist A drew ridge-like lines along the top of the shape, while artist B did not. This visual difference is evident from the statistics, which show a large fraction of geometric ridges and apparent ridges in artist A's drawing and almost none in artist B's drawing.

* 3.3. Can local properties explain lines?

While it is interesting to investigate the relationship between artists' lines and the lines commonly used in CG, a more fundamental question is how artists' lines relate to differential properties of images and surfaces. The analysis above addresses this question indirectly, since each CG definition is based on a set of local properties, but it is restricted to the relationships suggested by the known line-drawing algorithms. To address this question, we take a classic data mining approach. For every pixel of every prompt, we compute: (1) a feature vector x of properties derived from the 3D surface and 2D rendered image and (2) an estimated probability that a line will be included at the corresponding location in an artist's line drawing. Our goals are to learn a function f(x) that estimates the probability p of an artist drawing at a point (regression) and to understand which combinations of properties are most useful for building such a function (feature importance).

Choosing local properties. To build the feature vector for each pixel, we compute 15 local properties of three types commonly used in image processing, CG, and differential geometry. First, we consider four image-space properties of the rendered image prompt: luminance; gradient magnitude after Gaussian blur with σ = 2 pixels (ImgGradMag); and the minimum and maximum eigenvalues of the image Hessian, corresponding to extrema in the second derivative of luminance (ImgMinCurv and ImgMaxCurv, respectively).

In general, we expect that lines are more likely near image edges (where ImgGradMag is large) and at ridges and valleys of luminance (where ImgMinCurv and ImgMaxCurv are large).

Second, we consider view-independent, differential properties of the visible point on the 3D surface, including the maximum (k1), minimum (k2), mean ((k1 + k2)/2), and Gaussian (k1k2) curvatures (SurfMaxCurv, SurfMinCurv, SurfMeanCurv, and SurfGaussianCurv, respectively). In most cases, we expect lines to occur in areas where these expressions are large, though it has also been observed that lines are drawn near parabolic lines (k1k2 = 0).

Third, we consider view-dependent properties that correspond to specific definitions for computer-generated lines. Corresponding to the definition of ridges and valleys, we take the derivative of the largest principal curvature in the corresponding principal direction (SurfMaxCurvDeriv), which is zero at ridges and valleys. Corresponding to occluding contours, we compute the dot product between normal and view vectors (N · V). Corresponding to apparent ridges and valleys, we compute the largest view-dependent principal curvature (ViewDepCurv) and its derivative in the corresponding apparent principal direction (ViewDepCurvDeriv), which are large and zero, respectively, at apparent ridges. Corresponding to suggestive contours, we compute the radial curvature (RadialCurv) and its derivative in the radial direction (RadialCurvDeriv), which are zero and large, respectively, at suggestive contours. Finally, corresponding to principal highlights, we compute radial torsion, which is zero at principal highlights.
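Of these quantities, N · V is the simplest to compute. A minimal sketch per surface sample follows (array names are illustrative):

    import numpy as np

    def n_dot_v(normals, positions, eye):
        """Per-sample N . V for unit `normals` (n,3), surface `positions`
        (n,3), and camera position `eye` (3,). Occluding contours are its
        zero set: samples where the surface turns away from the viewer."""
        v = eye - positions                            # view vectors
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # normalize
        return np.einsum('ij,ij->i', normals, v)       # row-wise dot product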

Finally, we estimate the probability, p, of an artist drawing at a pixel by averaging the registered drawings of all artists for the same prompt and blurring with a Gaussian filter to account for tracing errors (σ = 0.5 mm).
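As an illustration, the sketch below computes the four image-space properties and the target probability p using Gaussian derivative filters from SciPy. Only the two σ values come from the text; the particular discrete filters and function names are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def image_space_features(luminance, sigma=2.0):
        """The four image-space properties: luminance, gradient magnitude
        after Gaussian blur (ImgGradMag), and the extreme eigenvalues of
        the image Hessian (ImgMinCurv, ImgMaxCurv). Gaussian derivative
        filters stand in for whatever discretization the study used."""
        # order=(dy, dx): per-axis derivative orders of the Gaussian filter.
        gx = gaussian_filter(luminance, sigma, order=(0, 1))
        gy = gaussian_filter(luminance, sigma, order=(1, 0))
        gxx = gaussian_filter(luminance, sigma, order=(0, 2))
        gyy = gaussian_filter(luminance, sigma, order=(2, 0))
        gxy = gaussian_filter(luminance, sigma, order=(1, 1))
        grad_mag = np.hypot(gx, gy)
        # Closed-form eigenvalues of the symmetric 2x2 Hessian
        # [[gxx, gxy], [gxy, gyy]] at every pixel.
        mean = 0.5 * (gxx + gyy)
        disc = np.sqrt(0.25 * (gxx - gyy) ** 2 + gxy ** 2)
        return luminance, grad_mag, mean - disc, mean + disc

    def target_probability(drawings, mm_per_pixel, sigma_mm=0.5):
        """Estimate p by averaging the registered binary drawings of a
        prompt and blurring with a sigma = 0.5 mm Gaussian to absorb
        tracing error, as described above."""
        p = np.mean([d.astype(float) for d in drawings], axis=0)
        return gaussian_filter(p, sigma_mm / mm_per_pixel)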

Predicting lines by regression. Several of the computed properties clearly can be used to distinguish pixels where artists draw from those where they do not (Figure 10). However, an interesting question is whether combinations of those properties can be used to predict where artists will draw more accurately than any of them alone. To investigate this question, we have experimented with several regression models, including linear regression, radial basis functions, regression trees, and several others. As an example, Figure 11 (left) shows a regression tree built with the M5P package in Weka24 to predict the set of line drawings for one view of the twoboxcloth model shown in Figure 3. Figure 11a shows the prediction of p resulting from this simple tree, while Figure 11b provides a visualization of which pixels sort into which leaves of the tree (pixels in the image are colored to match the text of the leaf).

In this example, several properties are combined by the decision tree to predict p, starting with ImgGradMag at the root. The set of properties chosen is instructive, as it suggests that they provide the highest incremental value in predicting p (at the start of tree building). Of course, many properties are correlated, and the decision tree may be nonoptimal, so an alternative tree may have produced similar or better predictions. Nevertheless, it is interesting to see how non-trivial combinations of local properties can be used to make predictions—even though the tree was purposely kept small in this example, it still is able to provide a plausible (albeit coarse) prediction for where artists draw lines (Figure 11a). If we consider deeper trees or other regression models, we are able to predict p from x more accurately.
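For readers who want to reproduce this kind of experiment, the sketch below fits a deliberately small regression tree with scikit-learn. Note this is a stand-in, not the study's setup: Weka's M5P fits linear models at its leaves, while scikit-learn's CART-style tree fits constants.

    from sklearn.tree import DecisionTreeRegressor, export_text

    def fit_small_tree(X, p, feature_names, max_depth=3):
        """Fit a small regression tree predicting the drawing probability p
        from per-pixel feature vectors X, mirroring the compact tree of
        Figure 11. Deeper trees trade interpretability for accuracy."""
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, p)
        # Print the learned splits in a readable, Figure 11-like form.
        print(export_text(tree, feature_names=feature_names))
        return tree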

* 3.4. Which local properties are most important?

In our data mining framework, it is possible not only to predict where artists will draw but also to examine which local features are most important when building such a regression model. For example, Random Forests1 estimate the importance of every feature to the model by building a large number of decision trees trained on different subsets of the data. For each feature m of each built tree, the error observed in predictions for the "out of bag" data (the part held out of training) is computed and compared to the error observed when values of feature m are permuted. The difference between these errors, averaged and normalized, is reported as the "importance" of feature m. For this analysis, we make the assumption that almost all occluding contours (N · V = 0) are drawn by artists (Figure 10), and so we exclude any pixel within 1 mm of a contour from the training set.
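The sketch below approximates this permutation-importance measure with scikit-learn; it differs from Breiman and Cutler's randomForest (used for Table 1, next) in permuting features over a held-out split rather than the out-of-bag samples, and all names are illustrative.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    def rank_features(X, p, near_contour, feature_names, seed=0):
        """Rank local properties by how much held-out error grows when each
        feature's values are shuffled. Pixels within 1 mm of an occluding
        contour are excluded, as in the text. A sketch, not the study's
        R-based pipeline."""
        keep = ~near_contour
        X_tr, X_te, p_tr, p_te = train_test_split(
            X[keep], p[keep], test_size=0.25, random_state=seed)
        forest = RandomForestRegressor(n_estimators=100, random_state=seed)
        forest.fit(X_tr, p_tr)
        result = permutation_importance(forest, X_te, p_te,
                                        n_repeats=5, random_state=seed)
        order = np.argsort(-result.importances_mean)
        return [(feature_names[i], float(result.importances_mean[i]))
                for i in order]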

Table 1 shows the relative feature importance as computed with the Random Forest implementation of Breiman and Cutler in R for the remaining pixels of all drawings in our study.20 The first four columns report the importance of features (rows) estimated when training on models of each type (bones, cloth, mechanical, and synthetic), while the rightmost column reports the average over the whole data set.

The results indicate that image-space intensity gradient magnitude is the feature among the tested set that is most useful in predicting the probability that an artist will draw at a particular location in our study (i.e., the average prediction error grows most if values of the image-space gradient magnitude are randomized). While image-space discontinuities often appear at the same place as boundary contours and occluding contours (N · V = 0), the locations where those contours appear have been excluded from this study. So, this result suggests that image-space intensity gradients away from the contours are also highly correlated with artist line locations. Of course, this is not surprising, as ridges, valleys, and shadow boundaries are commonly drawn by artists. However, it is a bit surprising that the simple image-space features (which do not require a 3D model to compute) are so important relative to the other, more complex properties that have been the focus of recent research in CG.

* 3.5. Which CG lines are most important?

We use Random Forests to compute the importance of the CG line definitions studied in Section 3.2 for predicting where artists draw lines. For this analysis, we compute a new feature vector for every pixel, storing the strength of every CG line definition. Note that strength is only defined at pixels where the algorithm would draw a line (e.g., zeros of the maximum curvature derivative for ridges); at all other pixels, the strength is set to zero. We then recompute the Random Forests with the new feature vectors.

Table 2 shows that strong image-space gradients in illumination (Canny edges) still provide the strongest cues for artists to draw lines, even in relation to other CG line definitions.


4. Conclusion

This study allows us to make several quantitative conclusions about how people draw 3D shapes. First, we observe that artists are consistent: in our study, nearly 75% of artists' line pixels lie within 1 mm of a line in every other drawing of the same prompt. These overlaps appear mainly at occluding contours, which comprise 57% of all lines drawn. Among the other lines, large gradients in image intensity (as measured by image-space gradient magnitude) provide the best single predictor for where artists will draw under the conditions of our study. Lines generated by Canny edge detection on a prompt image cover 76% of artists' lines with 80% precision. These lines are almost entirely overlapped (95%) by lines predicted by object-space line definitions commonly found in CG. The three object space definitions together cover 81% of the artists' lines at the same precision. We find that each of the four CG line definitions explains some artists' lines that the others do not.

The cumulative output of the four line-drawing algorithms considered in this paper covers only 86% of artists' lines at 80% precision. We believe that some of the remaining lines could be explained by new line definitions based on local properties and that clever combinations of properties could improve precision. Our data-mining analysis provides one way to find new definitions. We made an initial investigation of data-driven line drawing in our earlier article.4

We suspect, however, that a more fruitful direction for future research will be to investigate the global decisions artists make, rather than the local ones. For example, Figure 12 shows a case where artists chose to draw lines on locally weak ridge and valley features, while omitting lines along locally stronger ridges and valleys. This choice is consistent across several artists. When creating the flange drawing, several artists mentioned that they omitted lines that were "implied." Since implied features depend on context, they are not describable with local properties alone. Our earlier article and the associated data set (see footnote a) show more examples of these effects.

Other directions. This study suggests several directions for further work, some of which we have already begun to investigate.

One question is the perceptual effectiveness of sparse line drawings. We placed heavy restrictions (feature lines only, no hatching or shading) on the methods the artists were allowed to use, and some artists complained that their resulting drawings did not effectively represent the subject shape. To investigate this issue, we performed a perceptual study of how well sparse line drawings depict 3D shape, using the same set of models and a subset of the drawings collected here.5 The study subjects were asked to estimate the surface normal at many points on the drawings. We found that people interpret certain shapes almost as well from a line drawing as from a shaded image and that there was little difference between the best CG line-drawing techniques and the artists' drawings. We confirmed, however, that some shapes (e.g., the cervical and vertebra models) are difficult to depict effectively with sparse lines, even for a skilled artist.

The results of the perceptual study provide additional motivation for expanding the range of styles we study to include shading or even artistic stylization. It would be particularly interesting to understand the middle ground between a black-and-white line drawing and a fully realistic shaded image. Perhaps there is a slightly enriched visual style that can depict even difficult shapes effectively, without requiring the full, photorealistic representation. The methodology of collecting artists' work and applying perceptual evaluation should provide the necessary tools to investigate these questions. We believe future results in this area will be of interest not only to CG researchers but also to researchers studying machine and biological vision.


Acknowledgments

First and foremost, we would like to thank the artists who participated in our study, including those from Art Collaborations and Princeton University. Many people provided helpful suggestions, feedback, and encouragement for this project, particularly Doug DeCarlo, Maneesh Agrawala, Fredo Durand, Eitan Grinspun, and the members of the Princeton Graphics Group. We also thank the NSF (CNS-0406415, IIS-0612231, CCF-0702672, IIS-0511965, and CCF-0347427) and Google for funding to support this project and the Aim@Shape, VAKHUN, and Cyberware repositories for the models used in the study.


References

1. Breiman, L. Random forests. Mach. Learn. 45, 1 (2001), 5–32.

2. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8, 6 (1986), 679–698.

3. Chen, X., Golovinskiy, A., Funkhouser, T. A benchmark for 3D mesh segmentation. In ACM SIGGRAPH 2009 Papers (New York, 2009), 1–12.

4. Cole, F., Golovinskiy, A., Limpaecher, A., Barros, H. S., Finkelstein, A., Funkhouser, T., Rusinkiewicz, S. Where do people draw lines? ACM Trans. Graph. (Proceedings of SIGGRAPH) 27, 3 (Aug. 2008).

5. Cole, F., Sanik, K., DeCarlo, D., Finkelstein, A., Funkhouser, T., Rusinkiewicz, S., Singh, M. How well do line drawings depict shape? ACM Trans. Graph. (Proceedings of SIGGRAPH) 28 (Aug. 2009).

6. Debevec, P. Rendering synthetic objects into real scenes. In Proceedings of SIGGRAPH 1998 (Orlando, FL, 1998), 189–198.

7. DeCarlo, D., Finkelstein, A., Rusinkiewicz, S., Santella, A. Suggestive contours for conveying shape. ACM Trans. Graph. 22, 3 (2003), 848–855.

8. DeCarlo, D., Rusinkiewicz, S. Highlight lines for conveying shape. In Proceedings of NPAR 2007 (Aug. 2007).

9. Guptill, A.L. Rendering in Pen and Ink. Watson-Guptill Publications, New York, 1976.

10. Hertzmann, A., Zorin, D. Illustrating smooth surfaces. In Proceedings of SIGGRAPH 2000 (New Orleans, LA, 2000), 517–526.

11. Hilbert, D., Cohn-Vossen, S. Geometry and the Imagination. American Mathematical Society, 1999.

12. Isenberg, T., Neumann, P., Carpendale, S., Sousa, M.C., Jorge, J.A. Non-photorealistic rendering in context: an observational study. In Proceedings of NPAR 2006 (2006), 115–126.

13. Judd, T., Durand, F., Adelson, E.H. Apparent ridges for line drawing. ACM Trans. Graph. 26, 3 (2007), 19.

14. Lee, Y., Markosian, L., Lee, S., Hughes, J.F. Line drawings via abstracted shading. ACM Trans. Graph. 26, 3 (2007), 18.

15. Martin, D., Fowlkes, C., Tal, D., Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 2 (2001), 416–423.

16. Meyer, S.E., Avillez, M. How to Draw in Pen and Ink. Roundtable Press, New York, 1985.

17. Ohtake, Y., Belyaev, A., Seidel, H.-P. Ridge-valley lines on meshes via implicit surface fitting. ACM Trans. Graph. 23, 3 (2004).

18. Peck, S.R. Atlas of Human Anatomy for the Artist. Oxford University Press, London, U.K., 1982.

19. Phillips, F., Casella, M.W., Gaudino, B.M. What can drawing tell us about our mental representation of shape? J. Vis. 9, 5 (Sept. 23, 2005), Article 52.

20. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2005.

21. Ruskin, J. The Elements of Drawing. Elibron Classics, 1895.

22. Russell, B., Torralba, A., Murphy, K., Freeman, W. LabelMe: a database and web-based tool for image annotation. Int. J. Comp. Vis. 77, 1 (2008), 157–173.

23. Smith, S. Looking at line. In Complete Guide to Drawing and Painting. Reader's Digest Editors, eds. Reader's Digest, 1997, 13.

24. Witten, I.H., Frank, E. Data Mining: Practical Machine Learning Tools and Techniques, 2nd edition. Morgan Kaufmann, San Francisco, CA, 2005.

25. YafRay. YafRay 0.0.9: Yet Another Free Raytracer. www.yafray.org, 2008.


Authors

Forrester Cole, Aleksey Golovinskiy, Alex Limpaecher, Heather Stoddart Barros, Adam Finkelstein, Thomas Funkhouser, and Szymon Rusinkiewicz, Princeton University, Princeton, NJ.


Footnotes

a. http://www.cs.princeton.edu/gfx/proj/LD3D/

A previous version of this paper was published in ACM Transactions on Graphics 27, 3 (Proceedings of SIGGRAPH 2008, Aug. 2008).


Figures

Figure 1. Where people draw lines. Average images composed of 107 drawings show where artists commonly drew lines in our study.

Figure 2. CG line drawings. Recent research has proposed several ways to create a line drawing from a 3D model (a). We examined smooth silhouettes (b), suggestive contours (c), geometric ridges and valleys (d), apparent ridges (e), and image edges (f).

Figure 3. 3D models. The 12 models from our study, each shown in one of two views and one of two lighting conditions. Groups (a and b) are scans of real objects; (c and d) are synthetic.

Figure 4. Making a drawing. With the drawing page folded in half, the artist makes a freehand drawing while referring to the prompt page (left). The completed drawing page (right) contains a freehand drawing and a registered drawing.

Figure 5. Example drawings. Three drawings of the screwdriver model from the same view (a–c), and the average of 14 drawings of the same view (d).

Figure 6. Consistency of artists' lines. (a) Five superimposed drawings by different artists, each in a different color, showing that artists' lines tend to lie near each other. (b) A histogram of pairwise closest distances between pixels for all 48 prompts. Approximately 75% of the distances are less than 1 mm.

Figure 7. Precision and recall example. Left: apparent ridges are compared with five artist drawings. The solid line (highlighted) is the average PR for the set of drawings. Black dots indicate occluding contours only. Right: an example drawing (black) with apparent ridges overlaid (red, widened by 1 mm on each side). The PR for this example is circled (80% P, 88% R).

Figure 8. Categorizing artists' lines. (a) Fraction of all lines explained by image-based lines only, object-based lines only, and both. (b) Fraction of all lines explained by exterior contours, interior occluding contours, and all other object space lines.

Figure 9. Comparison of two drawings by different artists. Two drawings of the same prompt show significant visual differences. These differences are reflected in the statistics, especially in the use of ridge-like lines (green). RV: ridges and valleys; AR: apparent ridges; SC: suggestive contours.

Figure 10. Example local surface features. Top: the frequencies of pixels near artists' lines (blue) and away from artists' lines (green, dashed), as functions of local surface properties. Bottom: pixels near artists' lines as a fraction of the total. Pixels where N · V ≈ 0 or where the Sobel response is high are very likely near a line.

Figure 11. Decision tree for predicting where artists will draw. Left: decision tree learned from prompts of bones. (a) Predicted probabilities of where artists will draw for this view (black is high probability). (b) A visualization of which pixels fall into which leaves of the tree. Note that this tree was purposely kept small for didactic purposes, yielding a coarse prediction.

Figure 12. Subtle line selection decisions by artists. The red lines (solid boxes) in this composite (a) are unexplained at 80% precision but can be characterized as geometric valleys. However, the artists have omitted locally stronger valley features (dotted boxes), as shown by the maximum curvature of the model (b).


Tables

Table 1. Local property "importance."

Table 2. CG line definition "importance."



©2012 ACM  0001-0782/12/0100  $10.00

