Research on rapid generation of 3D models based on art and design cognitive models

Human-computer interaction interfaces based on hand-drawing are becoming increasingly widespread, since hand-drawing is a natural and convenient way for designers to express ideas. However, most current interfaces expect the designer to supply a single smoothly drawn stroke, rather than the multiple overlapping strokes designers are more accustomed to using. This paper presents an algorithm for hand-drawn interfaces that analyses multiple strokes and merges them into a single stroke, so as to capture the designer's creative intent. The space containing the strokes is subdivided recursively until each region holds only one stroke or a suitable local order can be established using principal component analysis; the subdivided regions are then reconnected to produce one large ordered list of points; finally, because this point sequence is noisy, the algorithm uses reverse subdivision to find control points for fitting a smooth B-spline curve.


Introduction
The art designer's perception of three-dimensional objects in three-dimensional space is primarily developed through light-and-shadow sketching training in early art education. This mode of artistic cognition is not only used by art designers but is also inherent in the analytical mode of ordinary human vision. This two-dimensional to three-dimensional cognitive approach, which can be referred to as the art-design cognitive mode, is the basis on which art designers represent the three-dimensional prototypes created in their minds as hand-drawn drafts.
Algorithm design for constructing a single curve from multiple strokes

Sampling of user input curves
The designer freely draws the curve with a number of strokes, in any direction and order, and the system eventually outputs a B-spline curve. While it would be possible to treat all strokes as one unordered cloud of points, using the local order of the points within each stroke gives a better approximation of the input strokes and a better representation of the designer's intent.

Algorithm flow
The designer draws a curve by hand as a set of strokes, noted as {S_i}. The algorithm eventually finds a B-spline curve that is close to these strokes overall. Each stroke S_i consists of a sequential set of points, the order of which is determined by the input device; this chapter uses the local order of S_i, i.e. the input order of the points within each stroke [2]. All strokes are first contained in a single box, which is then subdivided recursively into smaller boxes until each box satisfies a set of conditions simple enough for subsequent processing. The data within each box is processed to produce a locally ordered list of points; next, all small boxes are connected, and the local point lists of the boxes are concatenated into one large list; finally, a single B-spline curve is fitted to the whole list. Figure 1 shows the whole process in detail [1].
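As a rough illustration of this pipeline, the following sketch (not the paper's implementation: the function names, the leaf size, and the use of one global principal axis are all simplifying assumptions) subdivides the bounding box recursively, orders the points in each leaf along a principal axis, and concatenates the leaves:

```python
import numpy as np

def principal_axis(pts):
    """Unit eigenvector of the covariance matrix with the largest eigenvalue."""
    c = pts - pts.mean(axis=0)
    cov = c.T @ c / max(len(pts) - 1, 1)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return v[:, np.argmax(w)]

def simple_merge(strokes, leaf_size=8):
    """Very reduced sketch of the pipeline: one global box, split along the
    longer side until each leaf holds few points, order each leaf along the
    global principal axis, then concatenate leaves along that same axis.
    (The real algorithm uses per-box PCA and overlap handling.)"""
    pts = np.vstack([np.asarray(s, float) for s in strokes])
    axis = principal_axis(pts)

    def split(p):
        if len(p) <= leaf_size:
            return [p]
        lo, hi = p.min(axis=0), p.max(axis=0)
        d = int(np.argmax(hi - lo))          # split the longer dimension
        mid = (lo[d] + hi[d]) / 2
        return split(p[p[:, d] <= mid]) + split(p[p[:, d] > mid])

    leaves = [l for l in split(pts) if len(l)]
    # order leaves by the projection of their centroid on the global axis
    leaves.sort(key=lambda l: float(l.mean(axis=0) @ axis))
    out = []
    for l in leaves:
        t = l @ axis                          # local order inside a leaf
        out.append(l[np.argsort(t)])
    return np.vstack(out)
```

Even with strokes drawn in opposite directions, the merged list comes out monotone along the dominant direction of the data.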
Ordering points by their x or y coordinates alone would produce many-to-one conflicts in x or y. The algorithm in this chapter therefore maps the points into a new coordinate system by a linear transformation whose first coordinate axis follows the direction of maximum variation of the points. Principal component analysis (PCA) is used to implement this transformation. For a local neighbourhood of points p_1, ..., p_n with centroid p̄, the covariance matrix can be defined as C = (1/n) Σ (p_i − p̄)(p_i − p̄)^T. The eigenvector of C with the largest eigenvalue λ1 gives the direction of maximum change, and the eigenvalue ratio λ1/λ2 measures how close the points are to a straight line.
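This transformation can be sketched directly from the covariance definition above; `stroke_pca` and `straightness` are illustrative names, and the small epsilon guarding the eigenvalue ratio is an added assumption:

```python
import numpy as np

def stroke_pca(points):
    """Covariance C = (1/n) sum (p_i - mean)(p_i - mean)^T of a stroke's
    points; returns (eigenvalues descending, eigenvectors as columns)."""
    p = np.asarray(points, float)
    d = p - p.mean(axis=0)
    cov = d.T @ d / len(p)
    w, v = np.linalg.eigh(cov)           # ascending order from eigh
    order = np.argsort(w)[::-1]
    return w[order], v[:, order]

def straightness(points, eps=1e-12):
    """lambda1 / lambda2 -- large for nearly straight strokes."""
    w, _ = stroke_pca(points)
    return w[0] / (w[1] + eps)
```

A straight stroke yields a very large ratio, while an isotropic point set (e.g. a circle) yields a ratio near 1.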

Subdividing the box
The subdivision method depends on the trend of the data in the box. If the data is largely horizontal, the box is halved horizontally; if it is largely vertical, it is halved vertically [4]; if there is no clear trend, it is divided into four equal parts [5]. Very straight strokes and strokes with many points have the greatest influence on the choice of subdivision method; letting these primary strokes drive the choice also determines whether the strokes in the box are as simple as required. A weighted average is taken over the angle between the x-axis and the primary eigenvector of each stroke, each value being weighted by the stroke's number of points and by its eigenvalue ratio λ1/λ2. Weighting by the number of points ensures that short, divergent strokes have the least impact on the curve; weighting by λ1/λ2 ensures that the straighter a stroke, the greater its influence on the choice of subdivision method. If the average λ1/λ2 ratio over all strokes in the box is less than 50, or the weighted angle lies between 30 and 60 degrees, the box is divided into four equal parts; if the angle is greater than 60 degrees the box is halved vertically, and if less than 30 degrees it is halved horizontally. Finally, for very narrow boxes, subdividing according to the data sometimes creates even narrower boxes, leading to an infinite loop in which the boxes cannot be joined. It is therefore agreed that if a box's aspect ratio is greater than or equal to 7, the subdivision that reduces this aspect ratio is chosen, regardless of the data in the box [6].
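A hedged sketch of this decision rule follows (the thresholds 30, 60, 50 and 7 are taken from the text; the function name, the split labels, and the exact weighting are interpretive assumptions):

```python
import numpy as np

def choose_split(strokes, box_w, box_h):
    """Returns 'quarters', 'split_x' (vertical cut halving the width, for
    horizontal data) or 'split_y' (horizontal cut, for vertical data)."""
    # Aspect-ratio override: very narrow boxes are always split so the
    # ratio shrinks, regardless of the data (threshold 7 from the text).
    if max(box_w, box_h) / min(box_w, box_h) >= 7:
        return 'split_x' if box_w > box_h else 'split_y'
    angles, weights, ratios = [], [], []
    for s in strokes:
        p = np.asarray(s, float)
        d = p - p.mean(axis=0)
        evals, evecs = np.linalg.eigh(d.T @ d / len(p))
        ratio = evals[-1] / (evals[0] + 1e-12)        # lambda1 / lambda2
        e = evecs[:, -1]                              # primary eigenvector
        ang = np.degrees(np.arctan2(abs(e[1]), abs(e[0])))  # 0..90 to x-axis
        angles.append(ang)
        weights.append(len(p) * ratio)   # long, straight strokes dominate
        ratios.append(ratio)
    ang = float(np.average(angles, weights=weights))
    if np.mean(ratios) < 50 or 30 <= ang <= 60:
        return 'quarters'                             # no clear trend
    return 'split_y' if ang > 60 else 'split_x'
```

With a purely horizontal stroke the rule halves the width, with a vertical stroke the height, and with isotropic data it falls back to quartering.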

Conditions for ending the subdivision
A box may contain many strokes S_i. PCA provides a measure of how close the data in the box is to a straight line (the ratio λ1/λ2) and of how close each stroke is to the overall trend of the box (the angle between the principal vector of the stroke and the principal vector of the box). The principal vector of the box b is calculated first, followed by the principal vector of each stroke S_i. A box containing only one stroke need not be subdivided, as it already has a proper order. Strokes that contain too few points are ignored, because strokes that are too short have no effect on the structure of the whole curve. Conversely, if the angle between the principal vector of any stroke and the principal vector of the box is greater than 40 degrees, or if a stroke's ratio λ1/λ2 is less than 10, the box is not considered simple [7].

Processing of subdivided small boxes
First, all the overlapping regions must be determined. Find the start and end points of each stroke in the box and sort them by the order of their projections onto the principal vector of the box; denote the ordered list of these 'event points' by P_i. The list is then processed from left to right while a dynamic list of strokes is maintained: whenever a P_i is encountered, a stroke is added to the dynamic list if it is a start point, and removed if it is an end point [7]. For simplicity, consider the case of only two overlapping strokes, and fit a B-spline curve to the points of each overlapping stroke; denote these T(u) and B(u), let t and b be their numbers of control points, and let m be the larger of t and b. Sample m points from each curve at uniformly distributed parameter values. The i-th point of the merged curve is taken as a weighted average of the two samples, with weights that shift gradually from the first curve to the second so that the result transitions smoothly from T(u) to B(u). All points between P_{i−1} and P_i are then replaced, in local order, by these m points.
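Since the exact blending formula is not reproduced here, the sketch below shows one plausible linear ramp between the two curves, using uniformly resampled polylines in place of true B-splines; the function names and the weight schedule are assumptions, not the paper's formula:

```python
import numpy as np

def resample(poly, m):
    """Uniformly resample a polyline at m points by arc length."""
    p = np.asarray(poly, float)
    seg = np.linalg.norm(np.diff(p, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0, 1, m)
    return np.column_stack([np.interp(u, t, p[:, k]) for k in range(p.shape[1])])

def blend_overlap(top, bottom, m):
    """Weighted average of two overlapping curve segments T and B: the weight
    ramps linearly from the first curve to the second, giving a smooth
    transition (an assumed form of the elided blend)."""
    T, B = resample(top, m), resample(bottom, m)
    w = np.linspace(0, 1, m)[:, None]
    return (1 - w) * T + w * B
```

The blended segment starts on the first curve and ends on the second, interpolating smoothly in between.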

Connecting all small boxes
To join the boxes in the collection, select any box b and add it to an empty list L. Find the last point in the local order of b, determine which global stroke that point belongs to, and find the next point along the order of that stroke. If this point lies in a new box, that box is the next one in L and is added to the end [8]. Repeat until no new box is found; then return to box b and proceed in the same way in the other direction, finding the first point in its local order and moving backwards along the stroke. Each new box found this way is added to the head of L. When this pass ends, all the boxes are connected in the order given by L.
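A single-stroke simplification of this connection pass might look like this (using integer point ids as a stand-in for the global stroke order is an assumed simplification):

```python
def connect_boxes(boxes, start):
    """boxes: {box_id: list of point ids in local order}; point ids follow one
    global stroke order 0, 1, 2, ...  Walks forward from the last local point
    of the start box appending new boxes, then backward from its first local
    point prepending new boxes, as described in the text."""
    owner = {p: b for b, pts in boxes.items() for p in pts}
    n = len(owner)
    L = [start]
    p = boxes[start][-1]            # last point in the local order of start
    while p + 1 < n:                # forward along the stroke
        p += 1
        if owner[p] not in L:
            L.append(owner[p])
    p = boxes[start][0]             # first point in the local order of start
    while p - 1 >= 0:               # backward along the stroke
        p -= 1
        if owner[p] not in L:
            L.insert(0, owner[p])
    return L
```

Whichever box the walk starts from, the boxes come out in consistent stroke order.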
The distance field is computed with four directional scans. For example, scanning from the top-right corner to the bottom-left, each pixel's grey value RT is set to the minimum of the grey values of its top neighbour and its right neighbour, plus 1; the other three scans proceed analogously from the remaining corners. For each pixel in the image, the minimum of the corresponding values in the four resulting arrays is taken as the weight at that position, and these weights together form the distance field.
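The four-scan construction can be sketched as follows (each pass propagates distances from one quadrant, which is an interpretation consistent with the description above; the minimum over the four passes then gives the city-block distance to the nearest feature pixel):

```python
import numpy as np

def distance_field(mask):
    """mask: boolean array, True where the distance is 0 (the feature pixels).
    Each scan sets a pixel to min(two neighbours) + 1, visiting pixels so that
    both neighbours are already final; the field is the per-pixel minimum of
    the four scans, as described in the text."""
    h, w = mask.shape
    INF = h + w                      # larger than any possible distance

    def scan(dy, dx, ys, xs):
        g = np.where(mask, 0, INF).astype(int)
        for y in ys:
            for x in xs:
                if mask[y, x]:
                    continue
                best = INF
                for oy, ox in ((dy, 0), (0, dx)):
                    ny, nx = y + oy, x + ox
                    if 0 <= ny < h and 0 <= nx < w:
                        best = min(best, g[ny, nx])
                g[y, x] = min(g[y, x], best + 1)
        return g

    passes = [
        scan(-1, +1, range(h), range(w - 1, -1, -1)),              # top & right (RT)
        scan(-1, -1, range(h), range(w)),                          # top & left
        scan(+1, +1, range(h - 1, -1, -1), range(w - 1, -1, -1)),  # bottom & right
        scan(+1, -1, range(h - 1, -1, -1), range(w)),              # bottom & left
    ]
    return np.minimum.reduce(passes)
```

For a single feature pixel in the centre of a 5x5 image, the corners end up at city-block distance 4.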

User-drawn ridges
Curves that are not closed are first closed by joining their start and end points. Then, using a polygon-envelope algorithm, the area enclosed by the curve is computed and the ridge distance field is set to 0 over this entire area; since the whole enclosed area is treated as a ridge area, the ridge-field value outside it is no longer computed by the distance-to-ridge-line algorithm but as the distance from each point to the ridge area. With these two important features, the distance field and the ridge field, the ridge-field information must now be combined into the distance field [9].
The combined distance information of a coordinate point is represented by the distance field corrected by the ridge-line distance [10]. That is, a point attains the largest value of the new distance field, and so becomes a final ridge-line point, only if its distance-field value is large and it is also close to the ridge line (small ridge-field value). The original contour boundary points, meanwhile, remain boundary points [11].
Combining the ridge-field information into the original distance-field information requires a weighted averaging method that exploits the weighting and influence of the original information while keeping the computed results consistent with users' expectations. After comparing the results of different weighting schemes, a combination controlled by a weight k was adopted. The value of k indicates the weight of the ridge field in the combination, starting from 0: the larger k is, the greater the influence of the ridge on the distance field. k has a strong influence on the final result: if it is too small, the ridge has no influence on the original distance field; if it is too large, the changes around the ridge become too sharp and the surface is not smooth enough. After a number of trials, k was set to 10, which gave satisfactory results.
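The exact combination formula is not reproduced here; the sketch below uses one plausible form (an assumed additive ridge term scaled by k, not the paper's expression) that at least satisfies the stated behaviour of k: k = 0 leaves the distance field unchanged, and larger k sharpens the bump near the ridge:

```python
import numpy as np

def combine_fields(distance, ridge, k=10.0):
    """Assumed combination, for illustration only: add a term that is large
    where the ridge field is small (i.e. near the ridge), scaled by k."""
    return np.asarray(distance, float) + k / (1.0 + np.asarray(ridge, float))
```

Points near the ridge (small ridge-field value) receive the largest boost, and k = 0 reduces the result to the original distance field, matching the behaviour described above.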

Linear approximation of shape recovery from light and dark
The eye first receives the light-and-shadow information reflected from the surface of an object, which is then transmitted to the brain, so the most natural, intuitive and scientific approach is to use the light-and-dark information from the distance field for 3D object generation. Using the variation of light and dark across an object's surface in a single image to recover parameters such as the surface normal direction or the relative heights of surface points, and then using those height values to reconstruct the object in three dimensions, is exactly the technique of shape from shading (SFS) [12].
To simplify the calculation, the higher-order nonlinear terms of the Taylor expansion are discarded, since the literature suggests that the main part of the reflection function lies in the lower-order terms. Assuming that the albedo is constant and the reflection model is a Lambertian surface model, the image irradiance equation can be written as E(x, y) = R(p, q), where E(x, y) denotes the image brightness, R(p, q) denotes the reflection function, and the surface gradient is defined as p = ∂z/∂x, q = ∂z/∂y.
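For a Lambertian surface the reflection function has the standard gradient-space form, the cosine of the angle between the surface normal (−p, −q, 1) and the light direction (−p_s, −q_s, 1); the sketch below evaluates it, assuming the light direction is given by gradient-space coordinates (p_s, q_s):

```python
import numpy as np

def lambertian_R(p, q, ps, qs):
    """Lambertian reflectance map R(p, q) for light direction (ps, qs):
    cosine of the angle between normal (-p, -q, 1) and light (-ps, -qs, 1),
    clipped to zero for back-facing orientations."""
    num = 1 + p * ps + q * qs
    den = np.sqrt(1 + p**2 + q**2) * np.sqrt(1 + ps**2 + qs**2)
    return np.clip(num / den, 0.0, 1.0)
```

A flat patch lit from directly overhead gives full brightness, a 45-degree slope gives cos 45° ≈ 0.707, and back-facing patches are clipped to zero.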