Tractor simulator bridges the gap between operator and computer

Simulators are helping engineers finally figure out how best to get your brain and the tractor’s brain working together

It’s a good run with the tractor this time. The driver has no trouble piloting a straight line across the field, the machine is humming happily, and the display monitor confirms that all is well. The discs are cutting nicely through the field trash too, and the seed is getting dropped right on target.

Then reality returns. The “tractor” stops, the room lights come up, and the driver steps out of the cab and into a small theatre that has a white curved screen where, only a few seconds ago, it had looked like there was a field.

Behind the cab there’s another wall separating the theatre from a bank of computers where a group of grad students and technicians is tweaking the simulation program that runs the on-board monitor as well as the projectors that cast the moving image of the field onto the screen.

It’s the tractor simulator in the biosystems engineering department of the University of Manitoba. Unlike an airplane simulator, where a pilot sits in a closed cockpit and looks at a landscape shown on simulated windows, here the operator sits in an actual tractor cab surrounded by a projected farmscape.

The research job is to find the best way to divide the different tasks between the two operators — the human at the wheel and the computer in the wiring of the machine. There are some tasks that computers are really good at but, despite their growing power and capabilities, there are still jobs better suited to the human brain. This project is to look at what goes on during the seeding operation, strike the balance between who should look after what, and devise the best way for operator and machine to talk to each other.

“Right now we’re looking at automating various subsystems,” explains Danny Mann, head of biosystems engineering. “We need to know what is the right level of automation that keeps the human in the control loop so that if a problem arises, the human can still get up to speed very quickly and know what to do to take the corrective action.”

Photo (supplied): Computers are great at repetitive functions, but humans outscore them for agility and adaptability. For engineer Danny Mann, the challenge is to mesh the two brains so one plus one equals three.

Farmers from a few generations back worked with a semi-autonomous machine called the horse that had a brain and a complex sensory system. A few thousand years of selective breeding gave us the draft model for pulling implements either by itself or in teams. It could be programmed with a few simple commands: go (giddy-up), stop (whoa), turn right (gee) or turn left (haw). An experienced horse could walk a reasonably straight line at a constant speed so it could pull a small seed drill up and down a field leaving the farmer to manage the drill and make the occasional correction. The farmer would take over on the big turns and get the drill oriented along the line again.

With the development of the tractor, the farmer had complete control of speed and direction. Today’s modern machines can haul big, wide implements up and down enormous acreages with no breaks for feed and water.

One problem for operators of these larger and faster machines, however, is dealing with the huge amounts of information coming in from all corners. Without extra sets of eyes in remote places, you can’t see trouble coming or react to it when it arrives.

With myriad small procedures making up the seeding operation, we now need to split these assignments between the machine’s electronic silicon brain and the operator’s organic wet one.

“A computer is very good at doing routine or mundane tasks,” Mann says. “We’ve talked about auto steer where we’re using GPS so we can program in the width of the machine and the boundaries of the field. The auto steer gets the information from the GPS satellites, does quick calculations and decides whether to turn the steering wheel three degrees to the left or five degrees to the right. It can be doing that same calculation once every millisecond.”
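To picture what that loop looks like, here is a minimal sketch in Python. It is purely illustrative: the function name, the gain value, and the sign conventions are all hypothetical, not taken from the Manitoba project or any commercial auto steer.

```python
# Illustrative auto-steer correction loop (hypothetical sketch,
# not actual University of Manitoba or vendor code).

def steering_correction(cross_track_error_m, gain_deg_per_m=2.5):
    """cross_track_error_m: metres right (+) or left (-) of the GPS line.
    Returns a wheel adjustment in degrees: + steers right, - steers left."""
    correction = -gain_deg_per_m * cross_track_error_m  # steer back toward the line
    # Clamp the adjustment so the wheel never jerks hard.
    return max(-10.0, min(10.0, correction))

# The controller repeats this small calculation many times per second:
print(steering_correction(2.0))   # -5.0: two metres right, steer 5 degrees left
print(steering_correction(-1.2))  #  3.0: 1.2 metres left, steer 3 degrees right
```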

The human, on the other hand, is more instantly adaptable. Computers require programming, which means you’re presenting a series of logical rules that the computer must follow. For example, you can program a computer to recognize a coffee cup and then program it to fill the cup if it’s empty.

Computers are binary creatures. The program has to give them a choice of one action or another because they can’t handle more than two options at a time. This is binary logic, built on an IF/THEN/ELSE scenario. You tell the computer: IF you see a 10-ounce container with a handle on the side and it is empty, THEN pick it up and fill it with nine ounces of coffee, ELSE ignore it and move to the next object.
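Rendered as code, the rule is exactly that literal. The sketch below is illustrative only (the object fields are hypothetical), but it shows both the IF/THEN/ELSE shape and, at the end, the brittleness described next: change one detail the rule depends on and the computer no longer recognizes the cup.

```python
# Mann's coffee-cup rule as literal IF/THEN/ELSE logic
# (illustrative sketch; the object fields are hypothetical).

def handle_object(obj):
    # IF: a 10-ounce container with a side handle, currently empty...
    if obj["capacity_oz"] == 10 and obj["has_handle"] and obj["contents_oz"] == 0:
        # THEN: pick it up and fill it with nine ounces of coffee.
        obj["contents_oz"] = 9
        return "filled"
    # ELSE: ignore it and move to the next object.
    return "ignored"

cup = {"capacity_oz": 10, "has_handle": True, "contents_oz": 0}
print(handle_object(cup))  # filled

# One irrelevant change, say a shadow that hides the handle from the
# vision system, and the rigid rule fails where a human would not.
shadowed_cup = {"capacity_oz": 10, "has_handle": False, "contents_oz": 0}
print(handle_object(shadowed_cup))  # ignored
```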

Problems arise when the task becomes somewhat more complicated, for instance if the light changes. When the shadows change, the computer sees a different shape. A human still knows it’s a coffee cup but the different shadow completely bamboozles the computer.

“It’s only one tiny little piece of information that has changed but it can no longer function to deal with it,” Mann explains. “Whereas we human beings have that ability to say that we now have one extra piece of information, but it’s irrelevant. I’m just going to ignore it and I’m going to pick up the coffee cup.”

That’s why the auto steer moves the machine down a perfectly straight line according to information provided by the GPS, but the human operator still has to take the wheel for the wide turns at the edge of the field. The human knows there’s a ditch with a barbed wire fence there and has the eyeball judgment to know where to start the turn to avoid them. It’s a good way to make use of the strengths of both brains. The computer’s precision makes a straight line with no overlap while the human’s flexibility negotiates the obstacles.

The first step is to break the seeding operation down into its distinct tasks.

“There are about seven different subsystems on an air seeder system that we can look at,” Mann says. “We can choose to automate one of them and compare it with automating another and see how this influences overall system efficiency.”

What they’re really looking at right now is how a machine equipped with sensors can monitor these different subsystems and keep the operator informed through the on-board display screen. They’re also working on an efficient way to get machine and operator to communicate with each other. The computer tells the operator what it knows by showing it on the screen, but we don’t want to overwhelm a human with too much information coming too fast. The operator also has to tell the computer what to do through some kind of input device.

“I have to use the keyboard or mouse to tell my laptop what to do and I get information back from my laptop via the screen,” Mann says. “It depends on the layout of the icons and how that is all arranged that defines how efficiently the computer communicates with me. That’s the same type of approach that we’ve been trying for designing an integrated air seed display for that system.”

So the computer gathers data from the different sensors and relays information to the monitor. The next question for the simulator is how the monitor should display it so the operator can make sense of it. There’s sometimes a fine line between sending enough information and sending too much, and it’s up to the people working in the simulator to find that balance.
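One way to picture that balance is a simple triage step that ranks sensor readings by severity and surfaces only the few the operator needs right now. The sketch below is hypothetical, not the team’s actual display logic; the subsystem names and readings are made up for illustration.

```python
# Hypothetical display triage: show the operator only the most urgent
# readings instead of every sensor value at once.

SEVERITY = {"fault": 2, "warning": 1, "normal": 0}

def triage(readings, max_items=3):
    """readings: list of (subsystem, status, message) tuples.
    Returns at most max_items entries, most urgent first."""
    flagged = [r for r in readings if r[1] != "normal"]
    flagged.sort(key=lambda r: SEVERITY[r[1]], reverse=True)
    return flagged[:max_items]

readings = [
    ("seeding depth", "fault", "running 2 cm shallower than target"),
    ("fan speed", "normal", "4,200 rpm"),
    ("blockage sensor", "warning", "reduced flow on run 7"),
    ("tank level", "normal", "62 per cent"),
]
for subsystem, status, message in triage(readings):
    print(f"{status.upper()}: {subsystem}: {message}")
```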

“For example, we have the parameter for the seeding depth and a red flashing light that informs me there’s a problem and I have to make the necessary correction,” Mann says. “We could have a very low level of automation for any one of those subsystems where you have a sensor that detects a problem and it simply alerts a driver that there’s something wrong.”

From there we can test increasing levels of automation where a computer with greater processing power can deliver a detailed analysis of what’s tripping the warning light. A more advanced system could automatically make the correction and deliver an onscreen message explaining what it’s doing and how it’s adjusting the seeding depth.
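Those escalating levels can also be sketched in code. Again this is hypothetical, just to make the idea concrete: level one only raises the flag, level two adds a diagnosis, and level three makes the correction and reports what it did.

```python
# Hypothetical sketch of three automation levels for one subsystem
# (seeding depth), from alert-only up to automatic correction.

TARGET_DEPTH_CM = 4.0

def respond(measured_cm, level):
    error = measured_cm - TARGET_DEPTH_CM
    if abs(error) < 0.5:
        return "all OK"
    if level == 1:  # low automation: just flash the warning light
        return "ALERT: seeding depth problem"
    if level == 2:  # add a diagnosis for the operator
        return f"ALERT: depth off target by {error:+.1f} cm"
    # level 3: correct automatically and explain onscreen what was done
    return f"Corrected: opener moved {-error:+.1f} cm, back to {TARGET_DEPTH_CM} cm"

for lvl in (1, 2, 3):
    print(f"Level {lvl}: {respond(5.2, lvl)}")
```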

Computer technology marches on, and we’ll continue to develop smaller computers with greater processing power. Alongside that, we’re developing a variety of sensors that can feed more and more detailed information into a machine’s brain. Ultimately we will have machines that can go to work in the field while the farmer stays home, keeping an eye on them through a laptop while working on the accounting spreadsheets.

What may be satisfying to some of the old-school operators out there is the real goal of machine programming. The greatest success will be measured by how much the on-board computer behaves like a really competent human operator. After all, it’s still an experienced human mind that programs that computer, and a human hand that tests the program in a tractor cab mounted inside a simulator.

This article first appeared as ‘Brain plus computer’ in the March 31, 2015 issue of Country Guide
