Chris Reberg-Horton, Platform Director for Resilient Agriculture at the North Carolina State University Plant Sciences Initiative, and Esleyther Henriquez, field team leader for NC State's Precision Sustainable Agriculture lab, with BenchBot 3.0, which will take hundreds of thousands of plant photos for what will be the world's largest agricultural image repository. (Image: North Carolina State University Plant Sciences Initiative)
Tech Briefs: Could you describe your project?
Chris Reberg-Horton: I'm an applied agricultural scientist, so this project is applying AI to my work at the North Carolina State University Plant Sciences Initiative. We use AI to interpret camera data, make sense of it, and then make decisions based on that. For camera images to be interpretable using AI, we have to first train the software to recognize and identify the images. So, for example, if you've done a CAPTCHA, you've signed onto a website that says something like: pick all the pictures with a bicycle or a stoplight in them. First, it's there to make sure you're not a bot, but the second goal is that they tag all of those decisions everybody has made and use them for training the AI. You can imagine for a self-driving car, knowing the location of a bicycle, or a stoplight, or a pedestrian is a big deal.
So, this is what we need in agriculture. We’ve built a robot named BenchBot 3.0, which has a little computer and camera eyes looking at the world. It has to identify every plant as well as the soil. So, we take hundreds of thousands of images of everything you might see on a farmscape: all the cash crops, weeds, and cover crops. We can also add in crops under stress or crops that are being attacked by an insect — the computer gets smarter and smarter at interpreting what it's looking at.
This summer we're doing major weeds that grow on farms in North America — we've done that for a couple of summers. We try to image them every day from the second they emerge until they get to be a decent size. We also need more than one representative of every species, because plants are just like humans: every one looks a little bit different. You can't look at just one and say you've got it. So, we try to grab at least 300 representatives of every species to make sure that we've seen the entire range of variation within each species.
We then create synthetic training images, both from images of plants growing in pots and from going into the field and imaging plants there as well. It's a painful process to get the kind of imagery we need. It has to be done in a very particular way, so it costs a lot of money to do it in the field.
What we do is extract from each of our images the image of a particular weed at a particular stage of growth, for example, and label it. We call that a virtual asset. We do the same for different types of soil. We literally have hundreds of thousands of these assets. We then assemble different combinations of these assets into a sort of photomontage, using the separate images to create a synthetic scene. And since each asset has been labeled, we are able to identify every pixel of the synthetic image.
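To make that compositing step concrete, here is a minimal sketch of how labeled assets could be assembled into a pixel-labeled synthetic scene; the file names, class IDs, and image sizes are assumptions for illustration, not the lab's actual pipeline.

```python
# Minimal sketch: composite labeled "virtual assets" (plant cutouts with
# transparent backgrounds) onto a soil background and record a per-pixel
# label mask. File names, class IDs, and sizes are illustrative only.
import random
import numpy as np
from PIL import Image

def composite_scene(soil_path, assets, size=(1024, 1024)):
    """assets: list of (cutout_path, class_id) pairs, each cutout an RGBA
    image smaller than the scene. Returns (RGB scene, per-pixel label mask)."""
    scene = Image.open(soil_path).convert("RGB").resize(size)
    mask = np.zeros((size[1], size[0]), dtype=np.uint8)   # 0 = soil/background

    for cutout_path, class_id in assets:
        cutout = Image.open(cutout_path).convert("RGBA")
        x = random.randint(0, size[0] - cutout.width)      # random placement
        y = random.randint(0, size[1] - cutout.height)
        scene.paste(cutout, (x, y), mask=cutout)           # alpha compositing
        alpha = np.array(cutout)[:, :, 3] > 0              # pixels the plant covers
        mask[y:y + cutout.height, x:x + cutout.width][alpha] = class_id

    return scene, mask

# Usage with hypothetical asset files:
# scene, mask = composite_scene("soil_042.png",
#                               [("pigweed_cotyledon_017.png", 3),
#                                ("rye_tiller_102.png", 1)])
```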
The synthetic images are then used by our newly acquired NVIDIA Grace Hopper supercomputer to train the AI. The computer makes a guess at how to identify the image and we let it know whether it’s right or wrong.
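As a rough illustration of that guess-and-correct cycle, a single supervised training step might look like the following; the tiny network, class count, and settings are placeholders rather than the lab's actual model or framework.

```python
# Minimal sketch of the guess-and-correct training cycle: the model predicts a
# class for every pixel of a synthetic scene, and the loss measures how wrong
# the prediction was. The toy network and settings are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 6  # e.g., soil plus five plant classes (assumed count)

model = nn.Sequential(                 # a deliberately tiny fully-convolutional "segmenter"
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, masks):
    """images: float tensor (B, 3, H, W); masks: long tensor (B, H, W) of class IDs."""
    optimizer.zero_grad()
    logits = model(images)             # the model's per-pixel guess
    loss = loss_fn(logits, masks)      # how wrong that guess was
    loss.backward()                    # feedback the model learns from
    optimizer.step()
    return loss.item()
```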
Tech Briefs: So, once AI has correctly identified the synthetic image, what do you do with that information?
Reberg-Horton: Agriculture, unlike the self-driving car industry, does not have lots of free data sets out there. Since we’ve been building an image repository for several years, to use on two projects of our own, we decided to open it up to the public for AI training. We want to equalize the field with a massive database that anyone, large or small, can use to train AI for their own specific application.
One of our projects is funded by the US Soybean Board for mapping weeds. Here in America, and in North Carolina in particular, we have a problem with weeds that are going back to being as bad as they were 30-40 years ago before we had a lot of technology. That's because the weeds have evolved resistance to the herbicides we've been throwing at them — we're losing the control mechanisms. So, mapping weeds has been a hot topic. With our database, we can help farmers gather dynamic knowledge about how the weeds are responding to their various control tactics, where they’re getting better, where they’re getting worse.
Tech Briefs: So, does the farmer need a robot to image his fields?
Reberg-Horton: For the mapping, we don't have to go to a robot. For this project, we have built a camera system that gets deployed on sprayers. We can put up to eight cameras across the spray boom, enough to cover the largest spray booms being used in agriculture.
When the sprayers go out and do their last application, be it a fertilizer or a pesticide, these cameras are mapping every plant they see. So, at that point, if a weed is still being seen by the sprayer, that's a sign you've missed it: ‘You've got resistance, this is a problem.’
Obtaining this data comes at no extra cost; the sprayers are already driving across the field, doing a necessary job, so they're just using this camera system to take data while they're doing it. That makes us very inexpensive because all you have is the hardware cost for the camera system, but you don't have any extra labor assigned to taking the data. Then over the winter, the farmers could look at those maps and make decisions about how they're going to deal with their weed problem, whether it's improving or worsening.
The other application we’re building for is a very environmental one, where we map cover crop performance. Cover crops are grown in between cash crops. They are just there for protecting the soil for environmental reasons. And for that, we need to know the amount of biomass of the cover crop. You need to know, for example, whether it’s a legume or a grass or a broadleaf — at least the general category. Even species-level knowledge is handy because it affects what we do for nitrogen. And we need to know that in order to set our nitrogen applications.
Tech Briefs: Does your application help with analyzing soil factors, such as moisture?
Reberg-Horton: No, our application works just by watching plants. However, the plant is an interesting thing to watch — it's almost like a soil sensor in and of itself. So, for instance, we have soil moisture sensors to understand if the crop is getting dried out. But, you can see if a plant is starting to get stressed, and deduce that might be because of inadequate soil moisture.
It’s also critical to know the nitrogen levels in the soil, but we can’t tell the grower exactly what nitrogen rate a field needs — they have to use their own map. They might think that one part of the field needs this much nitrogen, and another part needs that much nitrogen. They have built that map over time by evolving their understanding of the more productive and less productive areas of the field.
But what we layer on top of that is a correction factor for the nitrogen they have obtained by growing legumes in their cover crops. So, if their map says, ‘We're going to put 100 pounds of nitrogen here,’ but I say, ‘You already grew 40 pounds of that,’ then we can tell them to go ahead and reduce their rate by 40 percent for that pixel. But they have to bring some intelligence about the field to the table themselves.
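That per-pixel correction is simple arithmetic. A minimal sketch, with units and array handling chosen purely for illustration:

```python
# Sketch of the per-pixel nitrogen correction described above: the grower's own
# prescription map minus the legume nitrogen credit estimated from the
# cover-crop map. Units (lb N/acre) and shapes are illustrative.
import numpy as np

def corrected_rate(prescription_lb_ac, legume_credit_lb_ac):
    """Both arguments are per-pixel arrays (or scalars) in lb N/acre."""
    adjusted = np.asarray(prescription_lb_ac, dtype=float) - legume_credit_lb_ac
    return np.clip(adjusted, 0.0, None)        # never prescribe a negative rate

# The example from the interview: a 100 lb/ac prescription with a 40 lb/ac
# legume credit becomes a 60 lb/ac application (a 40 percent reduction) for
# that pixel.
print(corrected_rate(100.0, 40.0))             # 60.0
```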
Tech Briefs: How does the farmer get the information from you?
Reberg-Horton: I think most people don’t realize how far digital farming has come in a lot of ways. For example, a new combine has what we call a yield monitor on it, which reports yield in small pixels across the field. That’s information that a grower uses to understand the productive and unproductive areas of their fields.
Farm equipment is really smart, so growers can use the cloud to access online dashboards where they're maintaining their data. They can look at graphs, and they can visualize their fields. That's the way they would interact with our tools as well. When the camera system drives across the field, we upload their data. We have both cellular gateways and the Starlink satellite system. So, once we get their data into our servers, we're able to make the maps, which the grower can download through their browser and load into their equipment.
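A rough sketch of that hand-off from the field to the server follows; the endpoint URL and payload layout are placeholders, not the lab's actual API.

```python
# Rough sketch: detection records logged at the edge are uploaded (over a
# cellular gateway or Starlink) so the server can build the field maps.
# The endpoint URL and payload layout below are placeholders.
import requests

def upload_detections(records, field_id):
    """records: list of per-frame detection dicts collected during a pass."""
    payload = {"field_id": field_id, "detections": records}
    resp = requests.post(
        "https://example.org/api/fields/upload",   # placeholder endpoint
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()                        # surface upload failures
    return resp.json()                             # e.g., an upload receipt
```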
Tech Briefs: Can you tell me about your Grace Hopper supercomputer?
Reberg-Horton: A graphics processing unit (GPU) is an ideal place to train AI because it can do massive amounts of operations in parallel. The Grace Hopper is really revolutionary because it combines the central processing unit (CPU) and the GPU on a single hybrid chip. That reduces the time and latency for the CPU and the GPU to communicate with each other.
Tech Briefs: How, exactly, do you use AI?
Reberg-Horton: Once we've got all of the synthetic images done, it’s AI training time. You feed the images to the Grace Hopper, and it does the training. Once you've trained the AI you have a model, which you can download and move to other places.
One of the things our camera system does is what we call edge computing. Instead of using large cloud-based servers, we use a little NVIDIA product called a Jetson. Every time a picture is taken, it gets shown to the embedded edge AI, which identifies and labels, say, the pigweeds here, another weed there, and a bit of crop. That's what's used in the mapping function.
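A minimal sketch of that on-device loop, assuming an exported model file and a GPS fix for each frame; the model name, class list, and preprocessing are illustrative.

```python
# Minimal sketch of the on-device (edge) loop: each camera frame is run through
# the trained model and the result is logged with a position so the weed map can
# be assembled later. The model file, class names, and preprocessing are assumed.
import cv2
import torch

model = torch.jit.load("weed_segmenter.pt").eval()        # exported model (assumed name)
CLASS_NAMES = ["soil", "crop", "pigweed", "other_weed"]   # illustrative classes

def process_frame(frame_bgr, gps_fix):
    """frame_bgr: image from the boom camera; gps_fix: (lat, lon) for this frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        labels = model(x).argmax(dim=1).squeeze(0)         # per-pixel class guess
    counts = {name: int((labels == i).sum()) for i, name in enumerate(CLASS_NAMES)}
    return {"gps": gps_fix, "counts": counts}              # one record for the map
```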
Tech Briefs: Will small farmers, who can’t afford expensive equipment, be able to use this?
Reberg-Horton: I think there is a really positive story that could be told. A lot of agricultural technology, like modern combines, can't be accessed by a small farmer — a combine can cost over $1 million. But we have a single-camera variant of it. We use an even smaller computer because it only has to keep up with one camera. And we can have it report to a tablet. So, for under $1,000, we can create a kit that does the exact same thing as this huge multi-camera setup. A small farmer could afford it, and they could walk it around or they could put it on an ATV. Once you've trained the AI — it serves the small farmer and the large farmer alike.
Tech Briefs: Doesn’t the grower know what's growing where, based on what and where he planted?
Reberg-Horton: Good question. Say, for the cover crop, once there is a mixture, nature dictates who wins and who loses, and it dictates that variably across the field — that's the fundamental problem. We know that we may only see these five species of a cover crop, but we don't know whether one area of the field is dominated by one species and the next area of the field is dominated by another.
There's a federal incentives program for cover cropping; it's an environmental subsidy growers receive. They've been pushing more and more mixtures, so they have some grass, some legumes, and sometimes even some brassicas. Farmers generally grow cover crops over the winter, when they're not growing a cash crop. Parts of the field will become completely grass dominated; it might be totally a rye cover crop, or in another part of the field, it might be a legume like a clover or a vetch. That's a good thing; diversity has many functions. But the legumes can bring a lot of nitrogen to the next crop, while the area of the field that was all grass won't have that effect. So now I need to treat each part of the field with a different nitrogen rate. There's some inherent soil variation, so precision agriculture is trying to more precisely manage nitrogen, not just on a field average, but pixel by pixel. So, we really need a pixel-by-pixel understanding of how much legume nitrogen is out there so that we can account for that. That's what this would do for them: make a map. The grower can then plug that map straight into their nitrogen applicator to vary the application rate all across that field.
Tech Briefs: So that's what happens with cover crops, but how about the cash crop, isn't that uniform?
Reberg-Horton: Yes, they know what that is. But oftentimes the task for the AI is making sure that any other plant like a weed growing in there is separated from the crop. A good example of this is John Deere’s See and Spray system. That allows the farmer to use a herbicide that will kill both the crop and the weed, which is not generally how we use herbicides. Most of our herbicides are selective in that they will kill some sets of weeds, but not the crop. But you can take another route and use a herbicide that kills almost every plant. Then, I'm going to use computer vision to say: ‘crop, crop, crop — there's a weed, hit it now.’
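A toy sketch of that spot-spray decision, splitting the camera's view into one strip per nozzle; the class IDs, trigger threshold, and geometry are assumptions, not John Deere's actual See and Spray logic.

```python
# Toy sketch of the "there's a weed, hit it now" decision: split the camera's
# per-pixel labels into one strip per nozzle and fire any nozzle whose strip
# contains enough weed pixels. Class IDs, threshold, and geometry are assumed.
import numpy as np

WEED_CLASSES = [2, 3]        # assumed label IDs for weed species
TRIGGER_FRACTION = 0.02      # fire if more than 2% of a strip is weed (assumed)

def nozzles_to_fire(label_mask, num_nozzles):
    """label_mask: (H, W) array of per-pixel class IDs across the boom's view."""
    strips = np.array_split(label_mask, num_nozzles, axis=1)   # one strip per nozzle
    fire = []
    for i, strip in enumerate(strips):
        weed_fraction = np.isin(strip, WEED_CLASSES).mean()
        if weed_fraction > TRIGGER_FRACTION:
            fire.append(i)                                     # trigger this nozzle
    return fire
```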
Weed management has been the biggest target we've seen for robots. There are a couple of places where robots have been very successful. John Deere even has robots that use lasers to try to kill weeds. And there are all kinds of mechanical devices as well. So, we're using computer vision to sort crops from weeds. Our repository would serve that use case as well.
Tech Briefs: What are your next steps?
Reberg-Horton: One of the things we're trying to do is to inspire other scientists around the world to start contributing. We can't train AI without all the necessary imagery, so we’re hoping to inspire others to start contributing data sets and eliminate that bottleneck.
Tech Briefs: Would they send their data to you to use?
Reberg-Horton: We’re certainly open to that, or for them to get it out there in the public — it's winning either way. But we do need to see more data in the public sphere. It's a good sector for publicly funded people, since private companies are incentivized to keep their data to themselves. So, it's nice to have the public sector come in and contribute.
Another thing that we at the Plant Sciences Initiative are trying to do is to stimulate more of the engineering community to think about agriculture. We have a bunch of engineering students in the building and they're oftentimes unaware of all these tasks in agriculture. They've heard about the self-driving cars, and a lot of them may have worked in medical imagery, which has been a huge growing area, but they’ve never even thought about agriculture. But that's what we're going to need for this revolution.
Tech Briefs: When do you expect your system to be applied in the field?
Reberg-Horton: We're going to be deployed on 15,000 acres in the spring of 2025 with our camera prototype. That is happening as a partnership with The Nature Conservancy and is funded by a program called the Regional Conservation Partnership Program (RCPP). Farmers in the Chesapeake Bay watershed, which is spread across four states, are able to enroll in this program to use the technology. They get a subsidy both for the conservation practice of cover cropping and for reducing their nitrogen rates. So, if we prescribe a reduction in nitrogen rates, and they follow it, that gets them the subsidy. And what we get out of that is testing at a mass scale.
We're putting the prototypes on about a dozen sprayers in that region. Also, sometimes farmers run out of time to do things, so they hire people called custom applicators to come in and spray fields, or to plant, or even harvest for them when they're running behind. So, we're going to deploy these on custom applicator rigs, and train the custom applicators in the use of the technology. Then we'll study it as it comes in. We’ll look at, say, how much nitrogen reduction we caused — whether we were right.
Tech Briefs: So, is the ultimate long-term goal of all this to improve productivity and sustainability?
Reberg-Horton: Exactly, it can be a real win-win here. Nitrogen is costly for farmers, so if we save them on nitrogen inputs, they win. The environmental gains from reducing nitrogen are a big driver. Nitrogen is a contaminant in our groundwater and in our estuaries. We're testing this first in the Chesapeake Bay region, since it has been a very long-standing problem there, which a lot of government money is trying to solve. But we will be moving it to other regions of the country. We hope to be in the Southeast in the spring of 2026.