Spring 1993


Artificial Intelligence, Authentic Innovation
by David L. Hart

If two heads are better than one, a scientist would probably enjoy taking an extra brain to work. The spare brain could toil away at a research project long after the researcher slips home for supper.

Of all the gadgets in the scientific toolbox, computers are the closest thing to an extra brain that researchers can muster so far. And, frankly, computers just don't measure up.

Sure, a computer can instantly recall billions of bits of data, process them and flash them up on a screen.

But the human mind can still do one thing computers can't even dream about: Make a judgment call.

All a computer can do is what some human being programmed it to do.

So far, anyway.

An eclectic assembly of University of Georgia scientists is working to change that. They are teaching computers to learn from their own mistakes and answer questions computers have never faced before.

In laboratories across the campus, scientists are putting "neural networks" -- named after the neurons in the human brain -- to work on problems as disparate as how to identify a specific carbohydrate molecule from thousands of potential patterns, or how to detect a ripe tomato.

Those are tasks that traditional computers just cannot do. Unlike humans, most computers cannot cope with situations they are not explicitly programmed to handle. When something doesn't fit the program, it simply "does not compute."

Neural networks are different. They do more than just crunch numbers. They use experience, almost like people, to deal with new situations.

So far, scientists have found at least two ways in which artificial neural networks and human brains are similar:

  • Both work surprisingly well at diverse tasks that pose problems for traditional computing techniques;
  • And no one knows precisely how either one works.

"Neural networks are still an art," said Dr. Ron McClendon, professor of biological and agricultural engineering. "You have to experiment some."

That experimenting has engaged scientists from a variety of academic disciplines -- and it has put computers to work in ways that have never been tried before.

For one, researchers at the university's Complex Carbohydrate Research Center are using computers to identify molecules by something akin to a fingerprint. Instead of a detective's fingerprint pad, computers scan a "fingerprint" based on the magnetic properties of the molecular nuclei, called the nuclear magnetic resonance (NMR) spectrum.

Before the scientists can find out what a particular carbohydrate does, they may spend as much as a year figuring out how the molecule is put together. The NMR fingerprint is their first clue, according to Dr. Bernd Meyer, associate professor of biochemistry at the CCRC.

The natural first step is to compare an unidentified molecule's fingerprint with those in a file of known molecular fingerprints. However, matching against thousands of fingerprints one at a time takes a long time for computers and scientists alike. Searching the fingerprint file by name is easy, but to do that, scientists must first identify the mystery molecule's structure -- which was the original problem.

Molecular Fingerprints

Conventional computer methods can help, but besides being slow, they are inflexible. Fingerprints are often "smudged," or filled with analytical noise from equipment pushed to its sensitivity limits. If the file contained a perfect match, computers could find it, but smudged prints give them fits.

A more useful file-matcher would handle these smudged prints. When it couldn't find an exact match, it would return the closest matches, which provide important clues for the molecular detective. Better still, it would distinguish between two versions of a molecule that have nearly the same structure but a side group that points in a different direction.

This subtle property, called "chirality," can turn a useful medication into a harmful one. Not even skillful scientists can detect this difference from the NMR fingerprints alone.

After reading about neural networks in the journal Science, Dr. Jan Thomsen, a project director at the CCRC, decided that they might solve the researchers' circular dilemma.

"I was maybe gambling a little on this one," Thomsen said. But his gamble paid off. "It was much more successful than anyone had anticipated."

Meyer and Thomsen stored their NMR spectra in a neural network database system they call the ANalytical CHemical Object Verification sYstem, or ANCHOVY for short.

ANCHOVY matches mystery fingerprints in a fraction of a second, and when there is no exact match, it returns the closest matches. Plus, it can track down a match from a fingerprint that has more noise than humans can deal with. In fact, it can be trained to identify molecules from spectra that have up to 25 times the noise usually allowed, Meyer said.
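
That noise tolerance can be built in during training. Here is a minimal sketch of the idea in Python, with made-up spectra and sizes (ANCHOVY's real networks and data are far larger): the network sees freshly noised copies of every stored spectrum on each pass, so a smudged query still lands on the right molecule, and recall afterward is a single pass through the network.

```python
# A minimal sketch, not ANCHOVY itself: train a one-layer network to map
# noisy copies of each stored spectrum back to the molecule's identity.
import numpy as np

rng = np.random.default_rng(0)
n_molecules, n_channels = 20, 64                 # made-up sizes
spectra = rng.random((n_molecules, n_channels))  # stand-in NMR spectra
labels = np.eye(n_molecules)                     # one output per molecule
W = rng.normal(0, 0.1, (n_channels, n_molecules))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Training: every pass, "smudge" the spectra with fresh random noise.
for _ in range(500):
    noisy = spectra + rng.normal(0, 0.05, spectra.shape)
    probs = softmax(noisy @ W)
    W -= 0.5 * noisy.T @ (probs - labels) / n_molecules  # gradient step

# Recall: one matrix multiply -- a split second, however long training took.
query = spectra[7] + rng.normal(0, 0.1, n_channels)      # a smudged print
print("best match:", int(np.argmax(query @ W)))          # expect molecule 7
```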

ANCHOVY can even do scientists one better: It can learn to use fingerprints to differentiate molecules by their chirality.

"It has always worked better than you would expect from a scientist," Meyer said. "We all have been surprised over and over again at how easy it is and how powerful it is."

The major stumbling block for the system is training on large databases. ANCHOVY may take four days to memorize 1,000 molecules. Days can seem like forever in computer time, but ANCHOVY's networks only have to be trained once. Afterwards, it has split-second recall.

However, to be useful to a pharmaceutical company, Meyer said, the system would have to store upwards of 100,000 fingerprints, which would make training unmanageable.

Meyer and Thomsen solved their problem by breaking the database into manageable chunks. For 10,000 fingerprints, the database uses 10 networks. ANCHOVY then rapidly searches the 10 separate databases to find the closest matches.

Thomsen pointed out that separate databases make logical sense as well, since there is no real reason to store, for example, carbohydrate and steroid fingerprints in the same database.
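
The divide-and-conquer step is easy to picture. In this sketch (hypothetical sizes, with plain distance matching standing in for ANCHOVY's trained networks), the file is split into ten chunks, each chunk nominates its closest fingerprints, and the nominations are merged into one overall ranking.

```python
# A sketch of searching ten sub-databases and merging the results.
import numpy as np

rng = np.random.default_rng(1)
fingerprints = rng.random((10_000, 64))    # 10,000 stored spectra
chunks = np.array_split(fingerprints, 10)  # ten sub-databases

def closest_in_chunk(chunk, query, offset, k=3):
    # Each sub-database returns its k nearest fingerprints; in ANCHOVY
    # a separately trained network plays this role.
    dists = np.linalg.norm(chunk - query, axis=1)
    return [(offset + i, dists[i]) for i in np.argsort(dists)[:k]]

query = fingerprints[4242] + rng.normal(0, 0.02, 64)  # a noisy query

candidates, offset = [], 0
for chunk in chunks:
    candidates += closest_in_chunk(chunk, query, offset)
    offset += len(chunk)

# Merge the per-chunk winners into one ranking; 4242 should top the list.
for idx, dist in sorted(candidates, key=lambda t: t[1])[:3]:
    print(f"fingerprint {idx}: distance {dist:.3f}")
```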

Meyer sees great potential for ANCHOVY in storing a lab's collection of fingerprints. "Every chemical research lab in the world would want such a system," he said.

While the CCRC researchers found that neural networks could solve a very complex task, others in the College of Agricultural and Environmental Sciences used them to solve an easy one -- in fact, so easy for people that it seems boring, repetitive and tailor-made for a computer.

There was just one problem: Conventional computer methods couldn't do it.

The task is "egg candling," the process of scanning eggs for cracks and other defects. For every egg you buy in the supermarket, a pair of eyes at a processing plant has made sure it is crack-free. To find defects, each egg is placed in front of a light -- it used to be a candle -- which highlights cracks, dirt stains and blood spots.

Two human candlers might inspect 240,000 eggs a day. To give people a break, so to speak, Dr. Ron McClendon, Dr. John Goodrum and graduate student Veren Patel taught a neural network to find cracks in the computerized image of a candled egg, which the machine learned to do better than a novice human candler.

Based on the success of this network, they are developing networks for other applications, such as detecting dirt stains and blood spots.

The egg candling network takes about an hour to train, using a set of 90 good and 90 cracked eggs. The machine examines all 180 eggs each second, so in an hour it looks at 650,000 eggs, give or take a few thousand.

Of course, once it is trained, the network takes a fraction of a second to compute its result. But if it only trains with 180 eggs, how could it possibly learn to distinguish a quarter million different eggs a day?

The answer is, it doesn't.

People don't either, for that matter. A human training for this job wouldn't memorize 180,000 or even 180 eggs. Instead, by trial and error a person learns to judge whether an egg is cracked by learning what cracks look like in general.

That's the concept that makes neural networks so novel; they can learn to generalize from 180 eggs to just about any egg.

It also sets the egg-candling network apart from the ANCHOVY database. Both projects use the same type of network and learn by trial and error, but the egg-candling system learns to generalize and ANCHOVY learns to memorize.
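
The difference is easy to demonstrate on toy data. In the sketch below (synthetic "eggs" described by five made-up features, not real candling images), a bare-bones classifier -- a single trainable unit, far simpler than the real network -- is trained on 180 examples and then tested on thousands it has never seen. Because it learned the general pattern rather than the individual eggs, it scores well anyway.

```python
# A toy demonstration of generalizing from 180 examples to unseen ones.
import numpy as np

rng = np.random.default_rng(2)

def make_eggs(n):
    good = rng.normal(0.0, 1.0, (n, 5))      # synthetic feature vectors
    cracked = rng.normal(1.5, 1.0, (n, 5))   # cracks shift the features
    return np.vstack([good, cracked]), np.array([0] * n + [1] * n)

X_train, y_train = make_eggs(90)             # 90 good + 90 cracked, as above
w, b = np.zeros(5), 0.0

for _ in range(1000):                        # simple logistic-unit training
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.1 * (p - y_train).mean()

X_test, y_test = make_eggs(5000)             # 10,000 eggs it has never seen
pred = (X_test @ w + b) > 0
print("accuracy on unseen eggs:", (pred == y_test).mean())
```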

Down-to-Earth Applications

To attack a completely different problem with a computer normally requires writing a new program from scratch. As ANCHOVY and the egg-candling network show, however, the same basic components can learn different tasks. Only the training changes.

Using the same variety of network from the egg-candling project, McClendon has tackled other problems that also require the computer to make a judgment call.

Being able to generalize allows neural networks to surpass more common statistical models. For example, a computer simulation can determine the ideal greenhouse temperature for a given set of conditions.

But the simulation is slow -- too slow to be implemented in a small, quick computerized thermostat.

So McClendon, with Dr. Ido Seginer, a visiting professor from the Technion in Israel, taught a neural network to remember what the simulation calculates for each situation.

The network shortcuts the simulation and recalls the right temperature quickly. In projects with Dr. Gerrit Hoogenboom, assistant professor of biological and agricultural engineering at the Georgia Agricultural Experiment Station in Griffin, McClendon has used neural networks to improve the performance of traditional statistical models for forecasting solar radiation and for predicting crop development.
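
The greenhouse trick amounts to building a stand-in, or surrogate, for the slow program: run the simulation offline to generate question-and-answer pairs, then train a small network to reproduce the answers. A sketch, with a made-up formula in place of the real greenhouse model (none of Seginer's actual simulation appears here):

```python
# A sketch of a surrogate network that mimics a slow simulation.
import numpy as np

rng = np.random.default_rng(3)

def slow_simulation(x):
    # Stand-in for an expensive greenhouse model: inputs are two weather
    # readings, output is the ideal temperature offset from a baseline.
    return 3 * np.sin(x[:, 0]) + 2 * x[:, 1]

X = rng.uniform(-1, 1, (500, 2))   # sampled conditions
y = slow_simulation(X)             # the simulation's answers, computed once

# One hidden layer, trained by gradient descent on the squared error.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2).ravel() - y
    gh = err[:, None] @ W2.T * (1 - h ** 2)      # backpropagated error
    W2 -= 0.05 * h.T @ err[:, None] / len(y)
    b2 -= 0.05 * err.mean(keepdims=True)
    W1 -= 0.05 * X.T @ gh / len(y)
    b1 -= 0.05 * gh.mean(axis=0)

test = rng.uniform(-1, 1, (1, 2))
print("simulation says:", slow_simulation(test)[0])
print("network says:   ", (np.tanh(test @ W1 + b1) @ W2 + b2).item())
```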

In another agricultural application, a neural network suggests when to spray a crop with pesticide and seems to outperform an expert system for the same task, McClendon said.

But the average person doesn't often make judgment calls about the amount of solar radiation or crop development. "Real" people have other problems to worry about, like finding ripe vegetables at the grocery store.

Dr. Chi Thai, associate professor of biological and agricultural engineering at the Griffin experiment station, said he thinks neural networks have potential here, too. His goal is to teach them how shoppers choose fresh produce in grocery stores. "Sure, everyone wants to buy a tomato. But exactly how red?" he asked.

It's not just a rhetorical question. If his networks can mimic just how ripe shoppers like their tomatoes -- and work in conjunction with mathematical models that predict how long tomatoes will take to get that ripe -- grocery stores can arrange to stock tomatoes at their best color.

One of Thai's networks maps measured values for tomato color to subjective consumer preference scores. Another network has learned to differentiate between the 'Sunny' and 'Sunbeam' varieties of tomato by their color spectra, and yet another has attempted to predict how fast tomato color will change from green to red. (This one only succeeded for one variety -- for reasons Thai can't quite explain.)

Traveling Salesmen and the Human Genome

Simple stuff, you say? Not on your life. Try explaining your mental processes when you choose ripe tomatoes. Better still, explain how you would find the shortest route that goes through every city in a given region and gets back to the starting point. Researchers call it the traveling salesman problem.
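
To see why the salesman's route is so hard, consider the brute-force approach. A tour of n cities can be ordered in (n-1)!/2 essentially different ways, so checking every tour works for a handful of cities and becomes hopeless soon after. A quick sketch with eight random cities:

```python
# Brute-force traveling salesman: feasible at 8 cities, hopeless at 80.
import itertools
import math

import numpy as np

rng = np.random.default_rng(4)
cities = rng.random((8, 2))            # eight cities scattered on a map

def tour_length(order):
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

# Fix city 0 as the starting point and try every ordering of the rest.
best = min(
    ((0,) + perm for perm in itertools.permutations(range(1, 8))),
    key=tour_length,
)
print("shortest tour:", best, f"(length {tour_length(best):.3f})")
```

Neural network approaches to such optimization problems trade this exhaustive search for a good answer found quickly.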

On the surface, this problem doesn't seem to have much to do with genetics. But for Dr. Jonathan Arnold, associate professor of genetics and statistics, finding the salesman's route is akin to constructing a map of the human genome, and he's using a neural network to tackle the problem.

"The most important medical diagnostic tool of the 20th century will be a map of the human genome," Arnold said. "About one-third of all human diseases have a genetic basis."

With that much at stake, how do you go about drawing the map?

To study long DNA strands, researchers break them into fragments they can examine. They are then left with the immensely complex task of reassembling the pieces -- in exactly the right order. A wrong turn for the salesman is the same as a misplaced gene in the genome.

"It would be nice to have an idea of where the fragment came from in the chromosome," Arnold said. "You're usually only interested in one little piece."

Dr. Hubert Chen, a professor of statistics, and statistics graduate student Momiao Xiong are working with Arnold on the DNA mapping problem. They have devised a new neural network model that addresses these so-called "optimization" problems. Its success in solving the DNA mapping problem could mean that a whole class of "unsolvable" problems might also be addressed more quickly and more precisely with neural networks.

But Chen and Xiong have set their sights even higher. They want to know exactly how neural networks do what they do. Until now, that's been a mystery.

"Roughly speaking, the neural network is a very complicated (mathematical) function," Chen said. "It uses some kind of learning process and empirical knowledge to predict the future or predict an unknown situation."

If a neural network is a mathematical function, it should be possible to figure out how the machine learns. In fact, Chen and Xiong eventually want to present a unified learning theory that explains how all the varieties of neural networks work. This would help researchers understand when and how neural networks would apply to particular situations.

"Because this neural network is new to most fields, it can be used in genetics, biology, computer science, statistics, education, engineering, business -- almost anything you can name," Chen said. "Because the real world is so complicated, there is no [single] model that can fit."

The mystery of neural networks isn't solved yet. Even after much success building and training neural networks, McClendon and other researchers know them only as well as most drivers know their cars. They can fill up the tank, turn the key and drive, but the details of internal combustion remain hidden under the hood.

Researchers and neural network "artists" must know when networks might work and know what their limits are. When they build these little brains, researchers still must decide how many nodes to use. Too few, and the network cannot discriminate well enough; too many, and the network starts to memorize instead of generalize.
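
These choices are usually settled by experiment. One common recipe, sketched below on synthetic data, is to hold some examples out of training and watch them: among candidate designs, keep the one that does best on the held-out set, and stop training when that performance stops improving. (The eight hidden nodes and the patience of 50 passes are arbitrary choices of exactly the kind the "artists" must make.)

```python
# Early stopping: quit training when held-out performance stops improving.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(0, 1, (300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the general rule to be learned
X_train, y_train = X[:200], y[:200]
X_val, y_val = X[200:], y[200:]             # held out, never trained on

n_hidden = 8   # too few nodes can't discriminate; too many just memorize
W1 = rng.normal(0, 0.5, (4, n_hidden))
W2 = rng.normal(0, 0.5, n_hidden)

def forward(X):
    return 1 / (1 + np.exp(-(np.tanh(X @ W1) @ W2)))

best_err, patience = np.inf, 0
for epoch in range(2000):
    h = np.tanh(X_train @ W1)
    d = 1 / (1 + np.exp(-(h @ W2))) - y_train       # prediction error
    W2 -= 0.1 * h.T @ d / len(d)
    W1 -= 0.1 * X_train.T @ (np.outer(d, W2) * (1 - h ** 2)) / len(d)

    val_err = np.mean((forward(X_val) > 0.5) != y_val)
    if val_err < best_err:
        best_err, patience = val_err, 0
    else:
        patience += 1
        if patience > 50:   # no progress for 50 passes: stop here
            print(f"stopped at pass {epoch}, validation error {best_err:.1%}")
            break
```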

They must also consider when to stop training, since that also affects whether a network generalizes or memorizes. When Thai started using neural networks, he fed them all sorts of information. But networks, unlike people, aren't smart enough to filter the junk from the relevant information. A network bases its results on everything it has available. So rather than being an extra brain in a researcher's toolbox, networks might be more like a sophisticated hammer.

"I consider them just another tool, like statistical regression," Thai said.
Neural networks aren't ready to replace people just yet. Although they were modeled after the brain because of its flexibility and power, a neural network is actually the ultimate one-track mind.

"Our brain is much more complicated than the computer," Chen said. "It is our computer. There is a similarity, therefore we can use the computer to solve some real complicated problems."

The brain still outperforms neural networks, but Chen is optimistic that the gap will narrow.

"We think we can get very close," he said.


David L. Hart is a graduate student at UGA's Henry W. Grady College of Journalism and Mass Communication. He earned bachelor's and master's degrees in computer science at the Georgia Institute of Technology and Carnegie-Mellon University, respectively.

Research Communications, Office of the VP for Research, UGA