In collaboration with NPEC, I integrated computer vision and robotics for plant phenotyping, focusing on detecting primary root tips with both traditional and deep learning techniques for precise detection and segmentation. This was followed by a robotic simulation model that used reinforcement learning to learn the plant inoculation process.
For our study, we were tasked with developing a computer vision, reinforcement learning, and robotics pipeline that would segment plant roots from images and control a liquid handling robot to inoculate plants at precise locations. NPEC wanted to automate the inoculation process, which would significantly enhance research efficiency and accuracy while saving valuable time and resources.
I’d like to share my journey, the challenges I faced, and the solutions I developed along the way.
Phenotyping involves measuring the physical and biological characteristics of plants, particularly their interactions with microbes. Accurate measurements are extremely important, as even small errors can lead to significant variations in research results. My primary goal was to create a reliable system that could accurately identify and measure root structures in a variety of environments.
This project consists of three parts: computer vision, robotics, and reinforcement learning. Together, these allowed me to build a sophisticated pipeline for detecting root tips and analysing their growth patterns.
My project started with developing a robust computer vision pipeline. I experimented with different segmentation models to improve the accuracy of root tip detection. This phase was essential because visualising and measuring root structures is key to understanding how plants interact with microbes. I iterated through different models and approaches, making sure to meet the accuracy requirements set by NPEC.
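To give a feel for the traditional side of the pipeline, a classical segmentation baseline can be sketched with simple thresholding and connected-component labelling. This is an illustrative stand-in only, not my actual deep learning model; the function name, threshold, and size filter are placeholders:

```python
import numpy as np
from scipy import ndimage

def segment_roots(gray, threshold=0.5, min_size=20):
    """Toy traditional-CV baseline: threshold a grayscale image,
    label connected components, and drop small noise blobs.
    (Illustrative stand-in for the deep learning segmentation model.)"""
    mask = gray > threshold                  # binary foreground mask
    labels, n = ndimage.label(mask)          # connected-component labelling
    for i in range(1, n + 1):
        if (labels == i).sum() < min_size:   # remove tiny specks
            labels[labels == i] = 0
    return labels

# Synthetic example: one bright vertical "root" on a dark background
img = np.zeros((64, 64))
img[10:60, 30:33] = 1.0
labels = segment_roots(img)
```

A learned model replaces the fixed threshold with per-pixel predictions, which is what made it robust across the varied imaging conditions mentioned above.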
The results of instance segmentation on the roots using my deep learning model are shown below:
The following picture shows the root landmarks identified by my landmark detection model. These landmarks are then used in the simulation environment to train the inoculation process.
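One simple way to derive a primary-root-tip landmark from a segmentation mask is to take the lowest foreground pixel of each labelled root. This geometric heuristic is shown purely for illustration; it is not necessarily the landmark detection method I used:

```python
import numpy as np

def root_tip_landmarks(labels):
    """For each labelled root, return the (row, col) of its lowest pixel,
    treating 'lowest in the image' as the primary root tip.
    (Geometric heuristic shown for illustration only.)"""
    tips = {}
    for root_id in np.unique(labels):
        if root_id == 0:          # skip background
            continue
        rows, cols = np.nonzero(labels == root_id)
        i = np.argmax(rows)       # deepest pixel = largest row index
        tips[root_id] = (int(rows[i]), int(cols[i]))
    return tips

mask = np.zeros((64, 64), dtype=int)
mask[10:60, 30:33] = 1            # one synthetic root, tip near row 59
tips = root_tip_landmarks(mask)
```

The appeal of such landmarks is that they give the downstream robot a single target coordinate per plant rather than a whole mask.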
Next, I created a simulation environment using pyBullet, a physics simulation tool. This virtual environment allowed me to test the movements of my robotic systems effectively. I evaluated the robot's ability to navigate a defined workspace, ensuring that it could reach critical points in my experiments. I was pleased to find that the simulation produced consistent results, providing a solid foundation for the tasks that followed.
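Conceptually, the workspace evaluation in simulation boils down to asking whether each target lies within the arm's reach of its base. A minimal geometric sketch of that idea follows; the actual check ran inside PyBullet, and the reach values here are illustrative, not the real robot's specifications:

```python
import numpy as np

def reachable(target, base=np.zeros(3), max_reach=0.85, min_reach=0.10):
    """Crude workspace test: a point is reachable if its distance from
    the robot base lies between the arm's minimum and maximum reach.
    (Simplified stand-in for the PyBullet check; reach values are
    illustrative placeholders.)"""
    d = np.linalg.norm(np.asarray(target, dtype=float) - base)
    return min_reach <= d <= max_reach

inside = reachable([0.4, 0.2, 0.3])    # within the annular workspace
outside = reachable([1.5, 0.0, 0.0])   # beyond maximum reach
```

In the physics simulation this becomes a question of whether inverse kinematics finds a valid joint configuration, but the geometric intuition is the same.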
With a working computer vision model and simulation environment, I created a wrapper for the Gymnasium framework. This allowed me to use Stable Baselines 3 for reinforcement learning, enabling my robot to learn how to move to specific positions within its workspace. My reinforcement learning algorithm achieved remarkable accuracy, locating target positions to within 1 mm - an exciting milestone in my project!
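The shape of such an environment can be sketched as below. This is a plain-Python stand-in following the Gymnasium reset/step convention; the real wrapper subclasses gymnasium.Env and is trained with Stable Baselines 3, and the observation layout and reward here are illustrative assumptions:

```python
import numpy as np

class ReachEnv:
    """Minimal reach-the-target environment following the Gymnasium
    step/reset convention. (Sketch only: the real version subclasses
    gymnasium.Env and is trained with Stable Baselines 3.)"""

    def __init__(self, target=(0.4, 0.2, 0.3)):
        self.target = np.array(target, dtype=float)

    def reset(self, seed=None):
        self.pos = np.zeros(3)                      # effector starts at origin
        return self._obs(), {}                      # (observation, info)

    def step(self, action):
        self.pos += np.clip(action, -0.05, 0.05)    # bounded displacement
        dist = np.linalg.norm(self.pos - self.target)
        reward = -dist                              # closer to target = higher reward
        terminated = dist < 0.001                   # within 1 mm of the target
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return np.concatenate([self.pos, self.target - self.pos])

env = ReachEnv()
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step([0.05, 0.05, 0.05])
```

Using the negative distance as the reward gives the agent a dense learning signal at every step, and the 1 mm termination radius mirrors the accuracy the trained policy ultimately reached.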
At the same time, I developed a PID (Proportional-Integral-Derivative) controller to compare its performance with the reinforcement learning model. The PID controller showed an accuracy of 6 mm. While this was a solid performance, it still fell short of the performance of the reinforcement learning approach. This highlighted the benefits of using machine learning techniques in robot control.
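A single-axis PID controller of the kind I compared against can be sketched as follows; the gains below are illustrative placeholders, not the tuned values from the project:

```python
class PID:
    """Textbook PID controller for one axis.
    (Gains are illustrative; the real controller was tuned separately.)"""

    def __init__(self, kp=5.0, ki=0.5, kd=0.1, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # accumulate error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy 1-D position toward a 0.30 m setpoint
pid, pos = PID(), 0.0
for _ in range(1000):
    pos += pid.update(0.30, pos) * pid.dt   # integrate the control output
```

The proportional term reacts to the current error, the integral removes steady-state offset, and the derivative damps overshoot; tuning these three gains per axis is exactly the fine-tuning work mentioned later as future improvement.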
The final stage of my project involved integrating the computer vision pipeline, the reinforcement learning model and the PID controller into two cohesive systems. This integration presented challenges, particularly in converting pixel coordinates from my computer vision outputs into real-world measurements. However, I successfully overcame these challenges by developing a conversion formula that bridged the gap between the different systems.
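At its core, such a conversion is a linear mapping from pixels to millimetres, assuming a known image resolution and plate size. The numbers below are illustrative assumptions, not NPEC's actual calibration:

```python
def pixel_to_mm(px, py, img_size=(2048, 2048), plate_mm=(120.0, 120.0)):
    """Map pixel coordinates to millimetres on the plate, assuming the
    image exactly covers the plate. (Illustrative calibration values;
    the real conversion used the measured plate and image dimensions.)"""
    sx = plate_mm[0] / img_size[0]   # mm per pixel, x axis
    sy = plate_mm[1] / img_size[1]   # mm per pixel, y axis
    return px * sx, py * sy

x_mm, y_mm = pixel_to_mm(1024, 512)
```

In practice the mapping also has to account for the image origin and any offset between the camera frame and the robot frame, which is where most of the integration effort went.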
My research into plant-microbe interactions showed that the reinforcement learning model outperformed traditional control methods, making it an invaluable tool for future research. This project demonstrated the potential of combining traditional computer vision with deep learning techniques to achieve accurate root length measurements.
While I have achieved my primary goals, there is still room for improvement. I believe that fine-tuning the parameters in my PID controller and experimenting with different reward functions in the reinforcement learning model could further improve accuracy and efficiency.
This project has been an incredibly enriching experience that has deepened my understanding of how technology can advance plant science. The journey from developing a computer vision pipeline to integrating multiple systems has provided me with valuable insights into the complexities of plant-microbe interactions.