Avigator uses a neural network to model the control strategies of the human test pilots. This work continues Dr. Nechyba's work on modeling human control strategies in driving. We are implementing a cascade neural-network architecture, but so far (straight-and-level flight only) a purely linear architecture has been enough to model the human control with very low error.
The actual learning takes place on the ground. Before training, the data undergo some
preprocessing. First, sections of data representing the flight regime of interest (e.g.,
straight-and-level) are isolated. Next, the data are resampled to regular time intervals
(0.1 s). Then the heading data are converted to delta-heading data. Finally, all data are
scaled to the range -1 to +1. The input to the neural network consists of a time history of
control and sensor information extending back to t-3. After training, the model is transferred
to the 386 for execution during flight.
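The preprocessing steps above might be sketched roughly as follows. This is only an illustration: the function names, the heading wrap-around handling, and the linear-interpolation resampling are our assumptions, not the actual code.

```c
/* Linearly interpolate a sample at time t from two bracketing
   measurements (t0, v0) and (t1, v1); applied repeatedly, this
   resamples an irregular log onto a regular 0.1 s grid. */
double resample(double t0, double v0, double t1, double v1, double t)
{
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}

/* Convert consecutive absolute headings (degrees) to a delta-heading,
   wrapping across the 0/360 boundary so 350 -> 10 reads as +20. */
double delta_heading(double prev, double cur)
{
    double d = cur - prev;
    if (d > 180.0)  d -= 360.0;
    if (d < -180.0) d += 360.0;
    return d;
}

/* Scale a raw value with known range [lo, hi] into [-1, +1]. */
double scale(double x, double lo, double hi)
{
    return 2.0 * (x - lo) / (hi - lo) - 1.0;
}
```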
For further reading on human skill modeling, see Dr. Nechyba's Ph.D. thesis.
The onboard 386 computer has two primary jobs. During the data-collection stage, it
collects data through the serial ports from the HC11 and the compass and saves this data
to disk with a timestamp.
During the execution phase, the 386 preprocesses the incoming data (resampling and scaling)
and runs it through the trained neural network. The resulting control commands are
then sent back to the HC11 over the serial port.
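Since straight-and-level flight has so far needed only a linear model, the in-flight model evaluation amounts to a weighted sum over the scaled time history. A hypothetical sketch (the channel count, history length, and weight layout are illustrative, not the real network):

```c
#define HIST 4   /* t, t-1, t-2, t-3: time history back to t-3 */
#define NIN  2   /* e.g. one sensor channel plus one control channel */

/* One output of a linear model: a weighted sum over the scaled
   time history of sensor and control values, plus a bias term. */
double linear_control(const double x[NIN][HIST],
                      const double w[NIN][HIST], double bias)
{
    double u = bias;
    int i, k;
    for (i = 0; i < NIN; i++)
        for (k = 0; k < HIST; k++)
            u += w[i][k] * x[i][k];
    return u;
}
```

The cascade architecture would extend this by adding trained hidden units whose outputs feed later units, but for this flight regime no hidden units have been needed.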
The HC11 is responsible for reading most of the sensor values and transmitting them to
the 386 over its serial port. It also decodes the servo-control PWM signals from the receiver
and sends them to the 386. In the execution phase it additionally generates the PWM signals
for the servos from the control data sent by the 386.
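As a rough illustration of the PWM side, standard R/C servo pulses run about 1.0 to 2.0 ms with 1.5 ms centered; assuming that convention (the actual pulse ranges on our hardware may differ), decoding and generating commands is a pair of simple linear maps:

```c
/* Map an R/C PWM pulse width in microseconds (1000..2000 us,
   1500 us centered) to a normalized command in [-1, +1]. */
double pwm_to_command(int width_us)
{
    return (width_us - 1500) / 500.0;
}

/* Inverse mapping: a normalized command back to a pulse width,
   for driving the servos during the execution phase. */
int command_to_pwm(double cmd)
{
    return 1500 + (int)(cmd * 500.0);
}
```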
Listings of the software will eventually be posted here.
Revised 12/4/99