Neurocomputing 6 (1994) 45-55, Elsevier
Object-oriented backpropagation and its application to structural design

S.L. Hung and H. Adeli*

Dept. of Civil Engineering, The Ohio State University, 470 Hitchcock Hall, 2070 Neil Ave., Columbus, OH 43210-1275, USA

Received 21 February 1992; revised 20 January 1993
Abstract

A multilayer neural network development environment, called ANNDE, is presented for implementing effective learning algorithms in the domain of engineering design using the object-oriented programming paradigm. It consists of five primary components: learning domain, neural nets, library of learning strategies, learning process, and analysis process. These components have been implemented as five classes in two object-oriented programming languages, C++ and G++. The library of learning strategies includes the generalized delta rule with error backpropagation. Several examples are presented for learning in the domain of structural engineering.

Keywords: Backpropagation; object-oriented programming; structural design.
1. Introduction

Considerable research activity has been reported in the literature on the development of design knowledge-based expert systems using artificial intelligence techniques [2, 13]. However, true intelligence is often associated with learning. While research on machine learning techniques has been in progress for a number of years, few papers have reported their application to engineering design. Adeli and Yeh [5] report the development of an unguided learning system in the domain of structural design using the approach of explanation-based learning. They developed a prototype system, called Structural Design Learning System (SDLS), in a combination of the Prolog and Pascal languages. Adeli and Yeh [4] presented a model of machine learning for engineering design based on the concept of self-adjustment of internal control parameters and the perceptron [14]. They cast the problem of structural design in a form that can be described by a perceptron without hidden units. Hung and Adeli [10, 11] extended that work by developing a two-layer network, that is, a neural network with a hidden layer. The learning performance and convergence speed of these two perceptron models were compared and discussed.

Several learning strategies associated with artificial neural networks, such as supervised learning using backpropagation [15], reinforcement learning [17], and unsupervised or competitive learning [8], have been proposed for classification problems. These learning approaches are based on changing the weights of the links connecting the nodes. In contrast, Fahlman and Lebiere [9] propose changing the neural structure (a topological change). In this research, we are interested in integrating different learning techniques with n-tuple multilayer artificial neural networks and developing more effective learning algorithms for the domain of engineering design in general and structural design in particular.

Object-oriented programming (OOP) has received increasing attention in software engineering. A new software development technique associated with OOP, called object-oriented design (OOD), has been proposed and used in software engineering. In the domain of structural engineering, Adeli and Hung [3] presented an object-oriented model for the processing of earthquake engineering knowledge. The model has been implemented in C++ in a prototype system, called OQUAKE. Knowledge representation in OQUAKE is through a combination of frames and scripts. In this work, an artificial neural network development environment (ANNDE) has been developed using the OOP paradigm. The generalized delta rule with the backpropagation learning strategy, associated with a multilayer artificial neural network, has been used in ANNDE.

* Corresponding author.
0925-2312/94/$07.00 (c) 1994 - Elsevier Science B.V. All rights reserved
2. An artificial neural network development environment - ANNDE
Our objective is to develop an integrated model of machine learning using various learning strategies with n-tuple multilayer artificial neural networks for engineering design applications. The model consists of five primary components: learning domain, neural nets, learning strategies, learning process, and analysis process. The function of each component is described in the following subsections.

2.1 Learning domain
It includes the input and output patterns for each training instance. For example, in the steel beam design problem [4, 11], each steel beam was described by five behavior components: the member length, the unbraced length, the maximum bending moment in the member, the maximum shear force, and the bending coefficient. Each steel beam was classified as a member of one of nine different groups of wide-flange shapes commonly used in steel structures. Therefore, in this case, we have five input patterns and one output pattern.

2.2 Neural nets
It provides the structure of the artificial neural network depending on the number of hidden layers selected by the user. A complete topology of an artificial neural network is the combination of the input layer, the hidden layers, and the output layer.
2.3 Learning strategies

It includes various learning procedures such as supervised learning, reinforcement learning, and competitive learning. This component is the kernel of the system.
2.4 Learning process

It performs learning using one of the learning procedures, such as backpropagation learning. The knowledge of an artificial neural network is represented by real values, called weights, assigned to the links connecting the nodes. Therefore, the learning process in artificial neural networks is the process of changing the values of the weights and reducing the system error to a certain prescribed value.
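The weight change behind this learning process is the generalized delta rule with a momentum term [15]. The following Python sketch is illustrative only (the paper's actual implementation is in C++/G++); the function name and the single sigmoid output unit are assumptions made for a minimal, self-contained example:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta_rule_step(w, x, target, eta=0.7, alpha=0.9, prev_dw=None):
    """One generalized-delta-rule update for a single sigmoid unit.
    w: weights, with the bias weight as w[-1]; x: input pattern.
    eta is the learning ratio, alpha the momentum ratio.
    Returns (new_weights, weight_changes)."""
    xb = list(x) + [1.0]                       # append constant bias input
    o = sigmoid(sum(wi * xi for wi, xi in zip(w, xb)))
    delta = (target - o) * o * (1.0 - o)       # error term for the output unit
    if prev_dw is None:
        prev_dw = [0.0] * len(w)
    # weight change = learning term + momentum * previous change
    dw = [eta * delta * xi + alpha * pdw for xi, pdw in zip(xb, prev_dw)]
    new_w = [wi + dwi for wi, dwi in zip(w, dw)]
    return new_w, dw
```

Iterating this step over all training patterns, and feeding each call the previous step's weight changes, reduces the system error toward the prescribed tolerance.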
2.5 Analysis process

After learning is achieved and the corresponding weights are obtained, the analysis process is used to verify the learning performance and, if necessary, to perform further iterations to improve it.

Object-oriented languages are a new generation of programming languages. The fundamental concepts of OOP are objects, classes, derived classes, and inheritance. In addition, OOP provides a set of techniques for the development of application programs that are reusable, extensible, and compatible. These properties are essential for the development of engineering software. OOP provides new ways to structure solutions and a means of directly representing important relationships between objects when dealing with complex problems. Instead of decomposing problems as data and dealing with data, in the OOP approach problems are analyzed in terms of objects and the relationships among them. The OOP paradigm provides a highly modular, flexible, and efficient software development environment.

In this work, the five aforementioned components of ANNDE are implemented as classes called LD, NN, LS, LP, and AP, as shown in Fig. 1. The functions of the five classes are the same as the functions of their corresponding components described previously. In Fig. 1, the solid arrow lines between classes indicate the relationship between base classes and derived classes. For instance, class LD is the base class of class NN, classes LP and AP are derived from class LS, and class LP is a friend class of class AP. The dotted arrow lines indicate the data flow in the system. Class LD obtains the patterns of input and output, class NN represents the structure of the neural network, class LS provides the various learning strategies, class LP performs the learning and stores the knowledge (weights) for each link, and class AP retrieves the learned neural structure and analyzes the given instances.
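A rough Python sketch of the class relationships of Fig. 1 is given below. Python has no friend classes, so the C++ friendship between LP and AP is approximated by handing the trained LP object to AP; the class names mirror the paper, but the bodies are placeholders, not the actual implementation:

```python
class LD:
    """Learning domain: input and output patterns of the training instances."""
    def __init__(self, patterns):
        self.patterns = patterns               # list of (inputs, outputs) pairs

class NN(LD):
    """Neural nets: class LD is the base class of class NN (Fig. 1)."""
    def __init__(self, patterns, hidden_layers):
        super().__init__(patterns)
        n_in = len(patterns[0][0])             # topology from the patterns
        n_out = len(patterns[0][1])            # plus user-chosen hidden layers
        self.layers = [n_in] + list(hidden_layers) + [n_out]

class LS:
    """Library of learning strategies: the kernel of the system."""
    def strategies(self):
        return ["backpropagation", "reinforcement", "competitive"]

class LP(LS):
    """Learning process: performs learning and stores the weights."""
    def __init__(self, net):
        self.net = net
        self.weights = {}                      # the learned knowledge

class AP(LS):
    """Analysis process: verifies the learning performance of a trained LP."""
    def __init__(self, lp):                    # stand-in for the C++ 'friend'
        self.lp = lp
```

Handing LP to AP by reference preserves the intent of the friend relationship: the analysis process can read the learned weights directly without re-deriving them.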
2.6 Implementation

ANNDE has been implemented in two different object-oriented programming environments: G++ (GNU C++) on a SUN-SPARC workstation and C++ on a SUN-4 workstation.
Fig. 1. The architecture of ANNDE (class LD, learning domain; class NN, neural nets; class LS, library of learning strategies; class LP, learning process; class AP, analysis process).
3. Application to structural design

ANNDE has been used for learning in the domain of structural engineering. An acceptable design must satisfy the requirements of a design code, such as the American Institute of Steel Construction (AISC) Load & Resistance Factor Design (LRFD) specifications [7] for the design of steel structures and the American Concrete Institute (ACI) code [1] for the design of concrete structures.
3.1 Example 1

This example is a load location problem taken from Vanluchene and Sun [16]. A simply supported beam is subjected to a 4-unit concentrated load. The beam is assumed to have a length of one unit. The exact shape of the bending moment diagram for the simply supported beam under the concentrated load is determined by the location of the load. Treating this as a pattern recognition problem, ANNDE is used to learn to recognize the shape of the bending moment diagram for any location of the concentrated load. We know that the maximum bending moment occurs at the location of the concentrated load; we want to see whether ANNDE can learn this piece of knowledge. A three-layer neural network (with two hidden layers) was used to learn this problem. The numbers of nodes in the input layer, the first and second hidden layers, and the output layer are 11, 5, 5, and 1, respectively (Fig. 2). The eleven input nodes represent the values of the bending moment at locations 0 to 10 (the beam is divided into 10 equal segments for the purpose of bending moment calculation). The output node represents the location of the concentrated load. The locations of instances are measured from the left support as a fraction of the span length, L.
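The training patterns for this example can be generated in closed form: for a unit-length simply supported beam with a point load P at location a, the moment diagram rises linearly from zero at the left support to its peak under the load, then falls linearly to zero at the right support. A Python sketch (the helper name is an assumption; P = 4 and the 11 sampling stations follow the problem statement):

```python
def moment_pattern(a, P=4.0, stations=11):
    """Bending moments at equally spaced stations of a unit-length,
    simply supported beam carrying a point load P at location a (0 < a < 1).
    Left reaction = P*(1-a); moment is linear on each side of the load."""
    pts = [i / (stations - 1) for i in range(stations)]
    return [P * (1 - a) * x if x <= a else P * a * (1 - x) for x in pts]

# one training instance: inputs = 11 moment values, output = load location a
inputs, output = moment_pattern(0.3), 0.3
```

Because the peak of each pattern sits at the load location, the network is in effect learning to read the position of the maximum out of the sampled diagram.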
Fig. 2. Neural network for the load location problem.
Training on six instances took 145 seconds on a SUN-SPARC workstation for η = 0.7 and α = 0.9, using a tolerance value of 10^-5 for the system error (E). The convergence curve for the learning system error is shown in Fig. 3. From this figure, we observe that there is no local minimum or stationary point in the learning process, and the learning converges to the prescribed tolerance limit quickly. After training the system, eight instances, including the six trained instances and two new untrained instances, were used to verify the learning performance. The average learning error for the six trained instances is about 0.6%. The average learning error for the two untrained instances is about 10%, which is much higher than the learning error for the trained instances; however, the learning performance can be improved by additional training.
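The stopping test described above can be sketched as follows. The exact form of the system error E is not spelled out in this excerpt, so the usual sum-of-squares error of the generalized delta rule [15] is assumed:

```python
def system_error(targets, outputs):
    """Assumed system error: E = 1/2 * sum of squared output errors,
    accumulated over all output nodes and training instances."""
    return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))

def converged(error, tol=1e-5):
    """Training stops once E falls below the prescribed tolerance."""
    return error < tol
```

With this definition, a run like the one above simply repeats weight updates until `converged(system_error(...))` holds or an iteration limit is hit.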
3.2 Example 2

This example is a rectangular concrete beam problem taken from Vanluchene and Sun [16]. A rectangular concrete beam requires five input data: Mu (ultimate bending moment), fy (yield stress of the reinforcing steel), f'c (concrete compressive strength), p (reinforcement ratio), and b/d (width-to-depth ratio of the rectangular section), and one output, d (depth of the rectangular concrete beam). A two-layer neural network was used to learn this problem (Fig. 4). This neural network consists of 5 nodes in the input layer, 5 hidden nodes, and one node in the output layer. Twenty-one training instances were provided to the neural network for learning the concrete beam problem. In order to study the influence of the learning and momentum ratios on the learning process, four different pairs of learning and momentum ratios (η, α) were used: (0.7, 0.9), (0.9, 0.9), (0.9, 0.95), and (0.95, 0.95). A tolerance of 10^-5 was used for the system error.
Fig. 3. System error for the load location problem (η = 0.7, α = 0.9).
Fig. 4. Neural network for the concrete beam design problem (input nodes: ultimate bending moment; reinforcing steel yield strength, ksi; concrete 28-day compressive strength, ksi; concrete beam reinforcement ratio, As/bd).
Four different sets of learning and momentum ratios were chosen. The rate of learning increases for the four sets of learning and momentum ratios in the following order: (0.7, 0.9), (0.95, 0.95), (0.9, 0.95), and (0.9, 0.9). The average learning times for the four sets are 4.2, 3.8, 3.5, and 3.1 hours on a SUN-SPARC workstation, respectively. From the figures of system error, we observe the existence of local minima and stationary points in learning this complex problem. For instance, stationary points are observed where the system error for the smallest pair of α and η shows slow convergence or remains constant over a number of iterations. On the other hand, the problem of local minima (jumps) is observed with the larger values of the pair of α and η. The system error for (0.9, 0.9) is shown in Fig. 5.
Fig. 5. System error for the rectangular concrete beam design (η = 0.9, α = 0.9).

After the neural network was trained, 31 instances, including the 21 trained and ten untrained instances, were used to verify the learning performance. The average learning error percentages for the 21 trained instances and the ten untrained instances are about 0.26 and 0.24, respectively. Since the learning performance is acceptable in this example, no additional training of the neural network is required.
3.3 Example 3

The final example created in this research is the selection of a minimum weight steel beam from the American Institute of Steel Construction (AISC) Load and Resistance Factor Design (LRFD) wide-flange (W) shape database [7] for a given loading condition. Adeli and Yeh [4] and Hung and Adeli [10] divided the available W shapes into nine groups in decreasing order of the plastic section modulus Zx. PERHID, developed by Hung and Adeli [10], could learn a satisfactory design and identify its group number only. In this work, ANNDE is used to learn to select the lightest W shape among all the available shapes instead of selecting a satisfactory group. Each instance consists of 5 input patterns: the member length (L), the unbraced length (Lb), the maximum bending moment in the member (Mmax), the maximum shear force (Vmax), and the bending coefficient (Cb). The output pattern is the plastic modulus (Zx) of the corresponding least weight member. A three-layer neural network with two hidden layers was used to learn this problem (Fig. 6). The numbers of nodes in the input layer, the first and second hidden layers, and the output layer are 5, 5, 3, and 1, respectively. The learning and momentum ratios are chosen as 0.7 and 0.9, respectively. The system error is limited to 10^-5.
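The design rule the network is being trained to reproduce, choosing the least-weight W shape whose plastic modulus Zx meets the demand, can be stated directly as a catalog lookup. In the sketch below, the function name and the catalog entries are hypothetical placeholders, not actual AISC LRFD database values:

```python
def select_lightest(required_Zx, shapes):
    """Pick the least-weight W shape whose plastic modulus meets the demand.
    shapes: list of (name, weight_per_ft, Zx) tuples; returns name or None."""
    candidates = [s for s in shapes if s[2] >= required_Zx]
    if not candidates:
        return None                            # no shape satisfies the demand
    return min(candidates, key=lambda s: s[1])[0]

# hypothetical catalog entries (name, weight per ft, Zx) for illustration only
catalog = [("W12x26", 26, 37.2), ("W14x22", 22, 33.2), ("W16x26", 26, 44.2)]
```

The neural network replaces the `required_Zx` side of this lookup: given the loading inputs, it predicts the Zx of the least-weight acceptable member directly.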
Fig. 6. Three-layer neural network for the minimum weight steel beam problem.
Ten instances were used to train this neural network. After the learning process was completed, six verification instances were used to verify the learning performance. The system error is shown in Fig. 7. Since this is a complex learning domain, the learning process took a long time (8.5 hours on average on a SUN-SPARC workstation) to converge, and the problems of local minima and stationary points were observed (see Fig. 7).
Fig. 7. System error for the minimum weight steel beam problem.
4. Conclusions

An artificial neural network development environment (ANNDE) has been developed using the object-oriented programming paradigm. It has been implemented in the C++ and G++ programming languages. The generalized delta rule with the backpropagation learning strategy has been implemented in ANNDE. Based on this work, the following observations and conclusions can be drawn:

(1) The backpropagation learning strategy can be applied to both simple and complex problem domains. However, for complex problem domains, it needs a long training time.

(2) Integrating ANNDE with a knowledge-based expert system for structural design can provide the capacity for creating an intelligent integrated structural design system with automatic learning.
(3) The backpropagation learning process is a gradient descent method. As in other search problems, the learning system may get trapped in a local minimum or stationary point, or may oscillate between such points. We are currently investigating how to accelerate the learning process by using other search algorithms in ANNDE, such as the conjugate gradient method.
Acknowledgement This research has been supported by a grant from The Ohio State University Research Challenge Program.
References

[1] ACI, Ultimate Strength Design Handbook, American Concrete Institute, SP-17 (73) (Detroit, MI, 1988).
[2] H. Adeli, ed., Expert Systems in Construction and Structural Engineering (Chapman and Hall, London, 1988).
[3] H. Adeli and S.L. Hung, An object-oriented model for processing earthquake engineering knowledge, Microcomput. in Civil Engrg. 5 (2) (1990) 95-109.
[4] H. Adeli and C. Yeh, Perceptron learning in engineering design, Microcomput. in Civil Engrg. 4 (4) (1989) 247-256.
[5] H. Adeli and C. Yeh, Explanation-based machine learning in engineering design, Engineering Applications of Artificial Intelligence 3 (2) (1990) 127-137.
[6] H. Adeli and C. Yeh, Neural network learning in engineering design, Proc. Internat. Neural Network Conf., Vol. 1, Paris, France (July 9-13, 1990) 412-415.
[7] AISC, Manual of Steel Construction - Load and Resistance Factor Design, American Institute of Steel Construction (Chicago, IL, 1986).
[8] G.A. Carpenter and S. Grossberg, The ART of adaptive pattern recognition by a self-organizing neural network, IEEE Comput. 21 (3) (1988) 77-88.
[9] S.E. Fahlman and C. Lebiere, The Cascade-Correlation learning architecture, Technical Report CMU-CS-90-100, CIS Dept., Carnegie Mellon Univ., Pittsburgh, PA, 1990.
[10] S.L. Hung and H. Adeli, A model of perceptron learning with a hidden layer for engineering design, Neurocomputing 3 (1) (1991) 3-14.
[11] S.L. Hung and H. Adeli, A neural network environment for intelligent CAD, in: H. Adeli and R.L. Sierakowski, eds., Mechanics Computing in 1990's and Beyond - Vol. One - Computational Mechanics, Fluid Mechanics, and Biomechanics, American Society of Civil Engineers, New York (1991) 93-97.
[12] Y. Kodratoff, Machine learning, in: H. Adeli, ed., Knowledge Engineering - Vol. One - Fundamentals (McGraw-Hill, New York, 1990) 226-255.
[13] S. Ohsuga, Knowledge processing and its application to engineering design, in: H. Adeli, ed., Knowledge Engineering - Vol. Two - Applications (McGraw-Hill, New York, 1990) 300-339.
[14] F. Rosenblatt, Principles of Neurodynamics (Spartan Books, New York, 1962).
[15] D.E. Rumelhart, G.E. Hinton and R.J. Williams, Learning internal representations by error propagation, in: D.E. Rumelhart et al., eds., Parallel Distributed Processing (MIT Press, Cambridge, MA, 1986) 318-362.
[16] R.D. Vanluchene and R. Sun, Neural networks in structural engineering, Microcomput. in Civil Engrg. 5 (3) (1990) 207-215.
[17] R.J. Williams, On the use of backpropagation in associative reinforcement learning, IEEE Internat. Conf. on Neural Networks, Vol. 1 (1988) 263-270.
Hojjat Adeli received his Ph.D. from Stanford University in 1976. He is currently a professor of engineering and member of the Center for Cognitive Science at The Ohio State University. A contributor to over 30 research and scientific journals, he has authored over 230 research and scientific publications, including several books, and edited ten books in various areas of computer science and engineering. Professor Adeli is the Editor-in-Chief of the journal Integrated Computer-Aided Engineering. He has been an organizer or member of the advisory board of over 25 national and international conferences and a contributor to over 80 other conferences held in 23 different countries. He was a Keynote and Plenary Lecturer at international computing conferences held in Italy (1989), Mexico (1989), Japan (1991), China (1992), Canada (1992), Portugal (1992), and the USA (1993). He has received numerous academic, research, and leadership awards, honors, and recognitions. His recent awards include The Ohio State University College of Engineering 1990 Research Award in Recognition of Outstanding Research Accomplishments and the Lichtenstein Memorial Award for Faculty Excellence. In 1990 he was selected as Man of the Year by the American Biographical Institute.
S.L. Hung received his M.S. and Ph.D. from The Ohio State University in 1990 and 1992, respectively. He is currently an Assistant Professor at the National Chiao Tung University, Taiwan, Republic of China. He has authored thirteen papers in the areas of expert systems, neural networks, machine learning, and parallel processing.