GENETIC ALGORITHMS IN MECHANISM SYNTHESIS

Arun Kunjur

Graduate Research Assistant

Sundar Krishnamurty

Associate Professor

Department of Mechanical Engineering

University of Massachusetts

Amherst, MA 01003

ABSTRACT

This paper presents a genetic algorithm based technique for mechanism dimensional synthesis. Genetic algorithms, like other evolutionary methods, differ from classical optimization methods in that there is a non-zero probability of attaining the global optimum. In contrast, in classical deterministic methods the end result is a function of the starting point, and these methods usually terminate at a local minimum. Another disadvantage of deterministic methods is the computational burden involved in the calculation of derivatives and Hessians. Genetic algorithms, on the other hand, are simple to implement and require only evaluations of the objective function, together with a set of genetic operators to explore the design space. Moreover, a population of near-optimal points is obtained, which allows the designer to select a design that also satisfies subjective constraints. These characteristics make this approach well suited for finding optimal solutions to highly nonlinear mechanism synthesis problems. This paper presents the development of such an approach, including the treatment of constraints. A salient feature of this modified genetic algorithm technique is its use of a modified crossover operator and a linear normalization based selection procedure that prevents premature convergence of the population. The application of this technique is presented using illustrative examples, and the results are discussed with respect to robustness and efficiency.

INTRODUCTION

Mechanism dimensional synthesis typically involves path, motion or function generation. Many graphical and analytical techniques were initially used for dimensional synthesis (Freudenstein, 1955; Erdman and Sandor, 1991; Suh and Radcliffe, 1978). The use of graphical methods is limited to simple path generation problems, while analytical methods require algebraic expressions and can become cumbersome for complex problems. Some of the shortcomings of graphical and analytical methods are overcome by numerical optimization methods, which may be either direct search or gradient based methods. In direct search methods (Nelder and Mead, 1965; Hooke and Jeeves, 1961), a small increment is applied to an initial feasible design and different increments are tried until a better solution is obtained. Gradient based optimization methods require gradient information, which is generally obtained using finite difference approximation techniques. The gradient projection method (Rosen, 1960 and 1961), the reduced gradient method (Wolfe, 1963) and the generalized reduced gradient algorithm (Abadie et al., 1969) are some examples of gradient based methods. These and other similar numerical optimization methods have been found to give reasonably accurate results for simple problems. In the case of highly nonlinear mechanism optimization problems, however, the number of iterations required to find the optimum step size along the descent direction can be quite large, consuming significant computation time. In particular, the application of these methods is often characterized by excessive function evaluations, premature termination of the algorithm and inaccurate solutions (Mariappan and Krishnamurty, 1992). The exact gradient method addresses these problems by determining the exact gradients of the objective and constraint functions instead of using finite difference approximations (Mariappan and Krishnamurty, 1995). However, it is still a gradient based search method and as such requires derivative knowledge. Furthermore, all the optimization techniques described so far aim for and result in a single optimal point, even though other similar or better designs may exist in the feasible design domain.

An alternative approach for dimensional synthesis is to use probabilistic methods as opposed to the deterministic methods discussed above. These methods are often called 'hill climbing' methods in the sense that they accept a relatively poor result with a given probability. Thus these algorithms are capable of breaking out of local minima and have a chance of converging to the global optimum. Simulated annealing (Kirkpatrick et al., 1983) and genetic algorithms (Goldberg, 1989) are examples of such algorithms. Simulated annealing, like the numerical optimization methods, gives only a single optimum design point. If, instead of a single design, a set of non-inferior designs is presented, the designer has the option of selecting the one that best suits the problem. Genetic algorithms search from a population and offer such a set of equally good solutions to the designer. Unlike the gradient methods, no gradient or Hessian information is necessary, which makes the implementation of genetic algorithms even more appealing from the point of view of user effort. This paper discusses a genetic algorithm based approach to optimize the coupler path of a four bar mechanism and illustrates its application with examples.

A major drawback in applying genetic algorithms in their 'as is' form to complex nonlinear constrained optimization problems such as mechanism synthesis is the computational burden involved. In a recent paper applying genetic algorithms to mechanism synthesis (Fang, 1994), it was reported that the genetic algorithm took a few hours to converge to an acceptable result. Conventionally, genetic algorithms operate on binary strings, whose processing is time consuming and computationally expensive. If the genetic algorithm can be designed to operate on real numbers instead of binary strings, it will be more efficient in terms of both time and computation. This paper presents a genetic algorithm formulation using real number representation and incorporating a guided genetic operator to investigate the design space for four bar path generation problems with prescribed timing.

BASIC GENETIC ALGORITHM

The genetic algorithm (Holland, 1975) is a probabilistic technique that operates on a population of designs rather than a single design at a time. It is analogous to natural selection in the evolution of living organisms, in that the fittest members of the population have a better chance to survive, reproduce and thus transfer their genetic material to successive generations. The initial population is formed by a set of randomly generated members. Each generation consists of members whose constituents are the individual design variables that characterize a design, and these are embedded in a binary string. Each member is evaluated using the objective function and is assigned a fitness value, which is an indication of the performance of the member relative to the other members of the population. A selection biased by the fitness values decides which members are used to produce the next generation. The selected strings are the parents for the next generation, which evolves through the use of two genetic operators, namely crossover and mutation. These operators give a random displacement to the parent population and generate a new population of designs. The crossover operator takes two parent strings, splits them at a random location and swaps the sub-strings so formed; a probability of crossover determines whether a crossover is performed. The mutation operator inverts a bit in the string depending on the probability of mutation. The new strings formed are evaluated and the iteration continues until a maximum number of generations has been reached or until a user defined termination criterion has been met. Figure 1 shows the sequence of steps in a basic genetic algorithm. The control parameters that have to be specified initially are the population size, the crossover and mutation probabilities, the maximum number of generations and the termination criterion.
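As a point of reference, the following Python sketch outlines the loop of Figure 1 for a basic genetic algorithm operating on binary strings. It is not the implementation used in this work; it assumes a maximization problem with positive fitness values, and the function and parameter names (run_basic_ga, evaluate, p_cross, p_mut) are illustrative.

```python
import random

def run_basic_ga(evaluate, n_bits, pop_size=100, max_gen=50,
                 p_cross=0.8, p_mut=0.01):
    # Random initial population of bit strings (pop_size assumed even).
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(max_gen):
        fitness = [evaluate(ind) for ind in pop]
        # Fitness-proportionate (roulette wheel) selection of parents.
        parents = [random.choices(pop, weights=fitness)[0]
                   for _ in range(pop_size)]
        next_pop = []
        for mate1, mate2 in zip(parents[::2], parents[1::2]):
            child1, child2 = mate1[:], mate2[:]
            if random.random() < p_cross:           # single point crossover
                cut = random.randrange(1, n_bits)
                child1 = mate1[:cut] + mate2[cut:]
                child2 = mate2[:cut] + mate1[cut:]
            for child in (child1, child2):          # bit-flip mutation
                for i in range(n_bits):
                    if random.random() < p_mut:
                        child[i] = 1 - child[i]
                next_pop.append(child)
        pop = next_pop
    return max(pop, key=evaluate)                   # best member found
```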

There are many alterations that may have to be introduced into the basic genetic algorithm described above, depending on the problem (Davis, 1991). For example, the whole population can be used for reproduction (generational replacement), only a part of the population can undergo reproduction (steady state replacement), or the best member in the population can be passed on to the next generation without any changes (elitist selection). The crossover operation can occur at a single point or at more than one point (multi point crossover). The fitness function may be based on the objective function value or on the position of the member in the population (linear normalization). The control parameters of the genetic algorithm may be fixed at particular values or can be made to vary as the genetic algorithm progresses. Real numbers may be used for representing the design variables, or the variables may be mapped to a binary string. No single variation out-performs the others for all types of problems, and therefore the designer has to decide which variations to implement.





FIGURE 1. GENETIC ALGORITHM PROCESS.

MECHANISM DIMENSIONAL SYNTHESIS

Typically, mechanism dimensional synthesis involves the minimization of structural error subject to a set of size and geometric constraints such as Grashof and crank rocker conditions. For example, in path generation problems, the coupler point is required to trace a path with minimum error relative to a given curve, specified by a set of points. The objective function is a measure of the error between the path obtained and the desired path. This error is usually expressed as the sum of the squares of the errors at the individual path points. The objective function is evaluated by determining the coordinates of the precision point at various positions of the mechanism for a particular design, and this is accomplished using basic geometric and trigonometric relations. The optimization problem for a four bar mechanism is stated as follows (Figure 2):

Min F = \sum_{i=1}^{p} \left[ (P_{xi}^{*} - P_{xi}^{s})^{2} + (P_{yi}^{*} - P_{yi}^{s})^{2} \right]          (1)

where i is the position index and p is the number of path points.

Subject to

Grashof condition:      x_L + x_S \le x_P + x_Q

Crank rocker condition: x_1 < x_0, x_2, x_3

Variable bounds:        x_j^{min} \le x_j \le x_j^{max}

where P_{xi}^{*}, P_{yi}^{*} are the x and y coordinates of the obtained points, P_{xi}^{s}, P_{yi}^{s} are the x and y coordinates of the desired points, the x_j are the link lengths, x_L is the longest link length, x_S is the shortest link length, and x_P and x_Q are the lengths of the other two links of the four bar. The Grashof condition ensures that the mechanism can be assembled at all positions. The crank rocker constraint ensures that the crank is the shortest link in the four bar and hence can be rotated by external means.
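To make this evaluation concrete, the following Python sketch computes the coupler point of a four bar for a given crank angle using elementary geometry and accumulates the structural error of equation (1). The parameterization (ground pivots at (0, 0) and (d, 0), crank a, coupler b, rocker c, coupler point at (rx, ry) in a frame attached to the coupler) is an illustrative assumption and does not follow the x0 to x9 labeling of Figure 2; it is a sketch of the kind of position analysis involved, not the code used in this work.

```python
import math

def coupler_point(a, b, c, d, rx, ry, theta2):
    # Crank pin A for crank angle theta2 (ground pivots at (0,0) and (d,0)).
    ax, ay = a * math.cos(theta2), a * math.sin(theta2)
    # Locate pin B as an intersection of the circles |B - A| = b and |B - O4| = c.
    ex, ey = d - ax, -ay                       # vector from A to O4
    e = math.hypot(ex, ey)
    if e > b + c or e < abs(b - c) or e == 0.0:
        return None                            # linkage cannot be assembled here
    h = (b * b - c * c + e * e) / (2.0 * e)    # distance from A along A -> O4
    k = math.sqrt(max(b * b - h * h, 0.0))     # perpendicular offset
    ux, uy = ex / e, ey / e
    bx, by = ax + h * ux - k * uy, ay + h * uy + k * ux   # one assembly branch
    # Coupler point expressed in a frame whose x-axis runs from A to B.
    tx, ty = (bx - ax) / b, (by - ay) / b
    return ax + rx * tx - ry * ty, ay + rx * ty + ry * tx

def structural_error(design, targets):
    # Sum of squared errors over the prescribed crank angles, as in equation (1).
    a, b, c, d, rx, ry = design
    error = 0.0
    for theta2, (xs, ys) in targets:
        point = coupler_point(a, b, c, d, rx, ry, theta2)
        if point is None:
            return float("inf")                # reject positions that cannot assemble
        error += (point[0] - xs) ** 2 + (point[1] - ys) ** 2
    return error
```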

FIGURE 2. TYPICAL FOUR BAR MECHANISM

GENETIC ALGORITHM IN MECHANISM SYNTHESIS

The genetic algorithm procedure described cannot be applied directly to mechanism synthesis, which is a highly nonlinear constrained optimization problem. Certain modifications are necessary in the basic genetic algorithm for the treatment of constraints and to avoid premature convergence of the solution. Towards this end, this work presents a genetic algorithm formulation to treat constraints and to guide the algorithm, to some extent, towards the global optimum.

The initial population for the genetic algorithm is randomly generated such that all constraints are satisfied. The design variables associated with each of the designs can be represented either as binary strings or as real numbers. The advantage of the binary representation is that conventional crossover and mutation operations become very simple. The disadvantage is that the binary numbers have to be converted to real numbers whenever a design is to be evaluated. Moreover, binary operations consume considerable computer memory as well as computation time. On the other hand, if the crossover and mutation operators can be formulated to act on real numbers, the binary representation can be eliminated. In this research, real number representation has been used. The modifications to the crossover and mutation operators are described later in this section.

Each of the randomly generated designs is evaluated by the objective function, and the constraints are checked separately. Since mechanism synthesis problems are minimization problems, the objective function value cannot be used directly as a measure of the fitness of a design; it has to be transformed so that a better design receives a higher fitness value. This can be done in two ways (Davis, 1991). First, fitness can be determined by scaling the objective function value and assigning an inversely proportional number. This is a simple method but may lead to premature convergence when a single member dominates the population. The second method is to rank the designs in the population in ascending order of objective function value and assign a fitness value proportional to the rank. This ensures that designs which are neither very bad nor very good remain in the competition to reproduce, thus preventing the domination of the population by a single, relatively exceptional design.
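A minimal sketch of such a rank-based (linear normalization) fitness assignment is given below; the starting value and decrement are illustrative parameters, not values taken from this work.

```python
def rank_fitness(objective_values, best=100.0, decrement=1.0):
    # Rank designs by objective value (smaller is better for minimization)
    # and assign fitness by rank rather than by the raw objective value.
    order = sorted(range(len(objective_values)), key=lambda i: objective_values[i])
    fitness = [0.0] * len(objective_values)
    for rank, i in enumerate(order):
        fitness[i] = max(best - rank * decrement, 0.0)
    return fitness
```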

In constrained optimization, the design space gets restricted and therefore, genetic algorithms cannot be directly applied. There are mainly three ways in which a genetic algorithm can be formulated to treat constraints (Davis, 1991). First, we can allow infeasible designs after penalizing them with a low fitness value, thus giving these members a low probability of selection for the next generation and ensuring that only the good strings are passed on to the next generation. Second, we can continue with the crossover or mutation operation until a feasible design is generated. This is not practical as any number of crossovers or mutations may result in an infeasible design and the algorithm may end up in an infinite loop. A third way of treating constraints is to modify the crossover and mutation operators so that the resultant design remains within the feasible region.
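As an illustration of the first approach, a penalized fitness could be computed as in the following sketch; the penalty weight and the form of the violation measure are assumptions chosen only for illustration.

```python
def penalized_fitness(raw_fitness, constraint_violations, penalty=1000.0):
    # Reduce the fitness of an infeasible design in proportion to how badly
    # it violates the constraints, so it is unlikely to be selected.
    total_violation = sum(max(g, 0.0) for g in constraint_violations)
    return max(raw_fitness - penalty * total_violation, 0.0)
```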

The selection of members for the next generation is done by the Roulette Wheel selection method (Goldberg, 1989) wherein members with high fitness value have a higher probability of getting replicated in the next generation. The selected members are then operated on by one of the two genetic operators, crossover or mutation, both of which have a user-defined probability of occurrence.
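A sketch of Roulette Wheel selection in Python is shown below; fitness values are assumed non-negative, and the function name is illustrative.

```python
import random

def roulette_select(population, fitness, n_select):
    # Spin a "wheel" whose slots are sized in proportion to fitness,
    # so fitter members are replicated more often in the next generation.
    total = sum(fitness)
    selected = []
    for _ in range(n_select):
        spin = random.uniform(0.0, total)
        running = 0.0
        for member, f in zip(population, fitness):
            running += f
            if running >= spin:
                selected.append(member)
                break
    return selected
```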

The crossover operator has been formulated to use real numbers instead of binary strings. It takes two designs and compares their objective function values. The difference between the corresponding design variables of the two designs is determined and each design variable is given an increment proportional to this difference in the direction of the better design. If this increment results in an infeasible design, the value of the increment is reduced until a feasible design is generated. The amount of increment and the rate of decrease of the increment are parameters that have to be initially specified. This is analogous to the direct search procedure in numerical optimization.
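One plausible reading of this modified crossover operator is sketched below: a child is produced by moving the worse parent toward the better one, and the increment is shrunk whenever the result is infeasible. The parameters step, shrink and max_tries, and the fallback when no feasible child is found, are illustrative assumptions rather than details given in the text.

```python
def guided_crossover(design_a, obj_a, design_b, obj_b, feasible,
                     step=0.5, shrink=0.5, max_tries=20):
    # Move each design variable of the worse parent toward the better parent
    # by a fraction of the difference between the two designs.
    better, worse = (design_a, design_b) if obj_a <= obj_b else (design_b, design_a)
    s = step
    for _ in range(max_tries):
        child = [w + s * (b - w) for w, b in zip(worse, better)]
        if feasible(child):
            return child
        s *= shrink                    # reduce the increment and try again
    return list(worse)                 # no feasible child found; keep the parent
```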

The conventional mutation operator acts on individual bits of the binary string associated with a design variable and inverts the bit if a probability test is satisfied. In effect, it generates a new design from the old design. With real numbers, mutation can be implemented by giving a random fluctuation to the old design variable. If this fluctuation makes the design infeasible, different fluctuations are tried until a feasible design is obtained.
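A corresponding sketch of the real-number mutation operator is given below; perturbing a single randomly chosen variable and the size of the fluctuation are illustrative choices, not specifics from this work.

```python
import random

def real_mutation(design, bounds, feasible, scale=0.1, max_tries=20):
    # Give a random fluctuation to one design variable; if the result is
    # infeasible, try a different fluctuation.
    for _ in range(max_tries):
        child = list(design)
        j = random.randrange(len(design))
        low, high = bounds[j]
        child[j] += random.uniform(-scale, scale) * (high - low)
        child[j] = min(max(child[j], low), high)   # respect the variable bounds
        if feasible(child):
            return child
    return list(design)                            # keep the parent unchanged
```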

The problem with the modified crossover operation is that if the population has prematurely converged, no amount of crossover will produce new designs. This can be addressed by decreasing the crossover probability and increasing the mutation probability so that new designs are introduced into the population. Once a sufficient number of new designs have been generated, the crossover and mutation probabilities are reverted to their initial values.

The designs resulting from the crossover or mutation operation are evaluated and subjected to the selection procedure. This iteration process continues until a maximum number of generations has been reached.

The application of genetic algorithms to mechanism path synthesis is demonstrated with two four bar problems: a five point synthesis (Mariappan and Krishnamurty, 1992) and an 18-point path synthesis problem (Mariappan and Krishnamurty, 1995). Both are path generation problems with prescribed timing. In the five point synthesis problem, the ground link is assumed to lie along the x-axis, the origin of the coordinate system is taken at the grounded end of the crank, and the crank angle at each point is specified. Thus, in Figure 2, x5, x6, x7 and x8 are fixed, making it a six variable optimization problem, the variables being the four link lengths of the four bar and the coupler dimensions. The objective function used for this problem differs slightly from the one stated in (1), for the purpose of comparing the results with those in the existing literature, and is given below.

Min F = ------(2)

For the 18-point problem, all the parameters indicated in Figure 2 are variable, making it a ten variable problem. It should be noted that x8 is the crank position for the first point and is incremented by 20 degrees for each subsequent point. The design points at the various crank angles for the two problems are given in Tables 1 and 2, respectively.
Points        1        2        3        4        5
Pxis          3.000    2.759    2.372    1.890    1.355
Pyis          3.000    3.363    3.663    3.862    3.943
x8 (deg)      30       45       60       75       90

TABLE 1: FIVE POINT PATH SYNTHESIS WITH PRESCRIBED TIMING.

The genetic algorithm implementation uses a population size of 100 for a maximum of 50 generations, as it was found that beyond this the improvement in the objective function was negligible. The following control parameters were found to give the best results: probability of crossover, 0.8; probability of mutation, varying from 0.033 to 0.3 depending on the number of distinct members in the population. Each generation is operated on by either the crossover or the mutation operator. If the standard deviation of the population is below 0.75 (i.e., all members are very close to each other), the chance of a crossover being performed is 30%; otherwise, it is 70%.
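The operator switch described above can be sketched as follows; the sketch assumes the standard deviation is taken over the members' objective function values, which is an interpretation rather than a detail stated in the text.

```python
import random
import statistics

def choose_operator(objective_values, std_threshold=0.75):
    # When the population has nearly converged (small spread), favor mutation
    # to inject new designs; otherwise favor the guided crossover.
    spread = statistics.pstdev(objective_values)
    p_crossover = 0.3 if spread < std_threshold else 0.7
    return "crossover" if random.random() < p_crossover else "mutation"
```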
Points        1      2      3      4      5      6      7      8      9
Pxis          0.5    0.4    0.3    0.2    0.1    0.05   0.02   0.0    0.0
Pyis          1.1    1.1    1.1    1.0    0.9    0.75   0.6    0.5    0.4
Δx8 (deg)     0      20     40     60     80     100    120    140    160

Points        10     11     12     13     14     15     16     17     18
Pxis          0.03   0.1    0.15   0.2    0.3    0.4    0.5    0.6    0.6
Pyis          0.3    0.25   0.2    0.3    0.4    0.5    0.7    0.9    1.0
Δx8 (deg)     180    200    220    240    260    280    300    320    340

Δx8 = increment in crank angle from the first point (degrees).

TABLE 2: 18 POINT PATH SYNTHESIS WITH PRESCRIBED TIMING.

RESULTS AND DISCUSSION

The results of a sample genetic algorithm solution to the 5-point dimensional synthesis problem using real number representation are shown in Figures 3 and 4. It can be seen from the graph in Figure 3 that the GA rapidly converges to a near optimal solution in the early generations, while the improvement is marginal towards the later stages. Table 3 shows a comparison of the results of a genetic algorithm using binary representation and one using real number representation. Results from the central difference and exact gradient methods are also shown. For the genetic algorithm methods, the best value over five runs (each with a population size of 100 and 50 generations) was taken, to check the consistency of the results.

It can be seen that the real number representation gives a better optimum in spite of consuming only a quarter of the time taken by the binary representation for the same number of generations. The optimum value obtained using real numbers is better than that obtained using the central difference gradient based method, but the exact gradient result is the best. It should be noted that the optimum design points of the two gradient methods are close to each other, while the genetic algorithm gives an optimum design point that is completely different. The same starting design was used in both gradient based methods, which resulted in designs that are close to each other. The genetic algorithm does not depend on a starting point and hence it locates an optimum elsewhere in the design domain. The exact gradient method stands out as the best method for this problem, but it is not without its overheads. It requires the calculation of exact gradients of the objective function and constraint equations, which may be time consuming, if not impossible. The genetic algorithm, on the other hand, requires only function evaluations and thus very little effort by the designer.

FIGURE 3 : BEST MEMBER OVER 50 GENERATIONS FOR REAL NUMBER REPRESENTATION (5-POINT SYNTHESIS).

FIGURE 4 : OPTIMUM SOLUTION USING REAL NUMBER REPRESENTATION (5-POINT SYNTHESIS)

             Central Difference   Exact Gradient   Binary Representation   Real No. Representation
NF           233                  39               5000                    5000
CP           -                    -                61.42                   16.98
F            0.006276             0.68e-05         0.00412                 0.000954
x0           2.90335              2.85813          4.134190                3.509643
x1           1.72989              1.99965          1.1965346               1.857906
x2           3.19002              3.06518          4.415418                4.725835
x3           2.04117              2.46329          2.582629                3.518721
x4           2.61037              2.36818          2.398144                2.503988
x9           0.91682              0.81114          0.829081                0.672018

NF Number of function evaluations

F Objective function value at optimum

CP CPU time (sec) on an IBM compatible 486PC running at 33MHz

TABLE 3 : COMPARISON OF RESULTS FOR 5 POINT SYNTHESIS.


FIGURE 5: BEST MEMBER OVER 50 GENERATIONS FOR REAL NUMBER REPRESENTATION (18 POINT SYNTHESIS).

Table 4 shows a comparison of results for the 18 point synthesis problem. Figure 5 shows the results of the genetic algorithm run with real number representation, and Figure 6 shows the optimal mechanism obtained. The real number representation gives the better result among the genetic algorithm methods, and it is the only one that is comparable to the gradient based methods. This problem is more complex than the previous one in that the design domain is much larger, and hence the objective function requires many more computations. It can be seen from the graph in Figure 5 that in the later generations the improvement in the objective function is very small, which indicates that a large number of function evaluations is required before the genetic algorithm can give a result comparable to that of the gradient based methods. If computation cost is not critical, a larger number of generations can be used to obtain a near optimal result.


FIGURE 6 : COMPARISON OF DESIRED AND GENERATED PATHS(18 POINT SYNTHESIS).

The number of function evaluations required by a genetic algorithm is significantly larger, but each evaluation is relatively simple compared to those of gradient based methods, which require the computation of derivatives and Hessians. It should be noted that each iteration of a gradient based method involves the computation of n first order derivatives of the objective function and n first order derivatives for each active constraint function, where n is the number of design variables. For complex objective functions, the first order derivatives are usually computed by finite difference methods, in which case the total number of function evaluations grows with both the number of iterations and the number of partial derivatives required per iteration. Thus, computationally, the genetic algorithm based approach is comparable to the gradient based methods, with the added advantage that it eliminates the need to compute derivative information, which is quite computation intensive. A comparison of the CPU time consumed by the GA with that consumed by the gradient based methods has not been made because such data are not available for the gradient based methods.
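As a rough accounting (an illustration, not figures reported in this work), with forward differences each differentiated function costs about one extra evaluation per design variable, so the work per gradient based iteration is approximately

N_{evals/iteration} \approx 1 + n + n \, m_a

where n is the number of design variables and m_a is the number of active constraints, whereas the cost of the genetic algorithm is simply the population size multiplied by the number of generations (100 x 50 = 5000 evaluations here).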

Overall, the genetic algorithm provides an efficient platform for optimization, involving very little effort on the part of the designer and at the same time giving acceptable results. If accuracy is a high priority then the exact gradient is the best method, but if designer effort is also of concern then the genetic algorithm is a good choice.
             Finite Difference   Exact Gradient   Binary Representation   Real No. Representation
NF           505                 240              5000                    5000
CP           -                   -                155.65                  37.03
F            0.026542            0.016789         0.089499                0.042958
x0           2.80000             2.85452          2.054872                1.879660
x1           0.360000            0.36355          0.504593                0.274853
x2           2.91000             2.91374          2.065737                1.180253
x3           0.49845             0.49374          2.154119                2.138209
x4           2.00000             2.00328          0.663106                0.915611
x5           0.95000             0.95928          0.359508                1.132062
x6           -1.18289            -1.19645         0.013550                0.663433
x7           0.75999             0.76398          0.599007                4.354224
x8           0.51000             0.51172          0.603429                2.558625
x9           1.03345             1.03006          0.071043                3.568086

NF Number of function evaluations

F Objective function value at optimum

CP CPU time (sec) on an IBM compatible 486PC running at 33 MHz

TABLE 4 : COMPARISON OF RESULTS FOR 18 POINT SYNTHESIS.

CONCLUSION

The two problems illustrated outline the procedure for a genetic algorithm based approach to mechanism synthesis. This approach uses modified genetic operators, which reduce the CPU time drastically and also give better results. The results of the modified approach have been found to be comparable to those of the existing gradient based methods. The main advantage of the genetic algorithm over the gradient based methods is that it eliminates the computation of derivatives of the objective function and constraint equations. Thus genetic algorithms can be used for mechanism path optimization as an alternative to gradient based methods when the objective function or constraint equations become so complicated that gradient computation is difficult or impossible. Furthermore, if the best design obtained is not suitable for some subjective reason (e.g., the crank is too small to manufacture), the designer can choose another design from the population of designs generated. Although the genetic algorithm does not guarantee the global optimum, the probability of attaining the global optimum increases with the number of function evaluations. The improvement in the objective function value diminishes as the number of generations increases, and therefore a trade-off has to be made between the required accuracy and the number of function evaluations.

ACKNOWLEDGEMENT

The authors gratefully acknowledge the support of the National Science Foundation under Grant No. CMS-9402608.

REFERENCES

Abadie, J. and Carpentier, J., 1969, "Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints," in Optimization, Fletcher, R. (Ed.), Academic Press, New York.

Arora, J. S., 1989, "Introduction to Optimum Design," McGraw-Hill, Inc., New York.

Cleghorn, W. L., Fenton, R. G., and Fu Jing-Fan, 1990, "A General Method for Synthesis of the Coupler Point Path of Planar Four-Bar Mechanisms," ASME Design Technical Conferences, Mechanism Synthesis and Analysis, DE-Vol. 25, pp. 209-215.

Davis, L., 1991, "Handbook of Genetic Algorithms," Van Nostrand Reinhold, New York.

Erdman, A. G., and Sandor, G. N., 1991, "Mechanism Design - Analysis and Synthesis," Vol 1, Prentice Hall, New Jersey.

Fang, E. W., 1994, "Simultaneous Type and Dimensional Synthesis of Mechanisms by Genetic Algorithms," ASME Design Technical Conferences, Mechanism Synthesis and Analysis, DE-Vol. 70.

Filho, J. L. R., Treleaven, P. C. and Alippi, C., 1994, "Genetic-Algorithm Programming Environments," Computer, June 1994.

Freudenstein, F., 1955, "Approximate Synthesis of Four-bar Linkages", Trans. ASME, Vol. 77, pp 853-859.

Goldberg, D. E., 1989, "Genetic Algorithms in Search, Optimization and Machine Learning," Addison-Wesley Publications.

Holland, J. H., 1975, "Adaptation in Natural and Artificial Systems," University of Michigan Press, Ann Arbor, Michigan.

Hooke, R., and Jeeves, T. A., 1961, "'Direct Search' Solution of Numerical and Statistical Problems," Journal of the Association for Computing Machinery, Vol. 8, pp. 212-229.

Kirkpatrick, S., Gelatt, C.D. and Vecchi, M.P., 1983, "Optimization by Simulated Annealing," Science, Vol. 220, pp. 671-680.

Mariappan, J. and Krishnamurty, S., 1992, "Using Exact Gradients in Mechanism Design," ASME Design Technical Conferences, Mechanism Synthesis and Analysis - DE-Vol.44-2.

Mariappan, J. and Krishnamurty, S., 1993, "Application of the exact gradient approach in optimal synthesis of mechanisms," ASME Design Technical Conferences, Advances in Design Automation - DE-Vol 65-1.

Mariappan, J. and Krishnamurty, S., "A Generalised Exact Gradient Method for Mechanism Synthesis," In press, Mechanism and Machine Theory.

Nelder, J. A., and Mead, R., 1965, "A Simplex Method for Function Minimization," Computer Journal, Vol. 7, pp. 308-313.

Rosen, J. B., 1960, "The Gradient Projection Method for Nonlinear Programming, Part I: Linear Constraints," SIAM Journal of Applied Mathematics, Vol. 8, pp. 181-217.

Rosen, J. B., 1961, "The Gradient Projection Method for Nonlinear Programming, Part II: Nonlinear Constraints," SIAM Journal of Applied Mathematics, Vol. 9, pp. 514-532.

Srinivas, M. and Patnaik, L. M., 1994, "Genetic Algorithms: A Survey," Computer, June 1994.

Stoer, J., 1985, "Principles of Sequential Quadratic Programming Methods for Solving Nonlinear Programs," in Computational Mathematical Programming, Schittkowski, K. (Ed.), NATO ASI Series, Vol. 15, Springer-Verlag, Berlin.

Suh, C. H., and Radcliffe, C. W., 1978, "Kinematics and Mechanism Design," John Wiley & Sons, New York.

Wolfe, P., 1963, "Methods of Nonlinear Programming," in Recent Advances in Mathematical Programming, Graves, R. L., and Wolfe, P. (Eds.), McGraw-Hill, New York, pp. 67-86.