Evaluation of predicted data

To convert the overlap between predicted bounding boxes and ground truth bounding boxes into a measure of accuracy and precision, the most common approach is the Intersection over Union (IoU) metric. IoU is the area of overlap between the predicted box and the ground truth box divided by the area of their union.

The IoU metric ranges from 0 (no overlap at all) to 1 (complete overlap). In the wider computer vision literature, the common overlap threshold is 0.5, but this value is arbitrary and ultimately irrelevant to any particular ecological problem. We treat boxes with an IoU score greater than 0.4 as true positives, and boxes with scores less than 0.4 as false negatives. The value of 0.4 was chosen based on visual assessment, at which point there is good visual agreement between predicted and observed crowns. We tested a range of overlap thresholds from 0.3 (requiring less overlap between matching crowns) to 0.6 (requiring more overlap between matching crowns) and found that 0.4 balances strictness without erroneously discarding trees from downstream analysis.
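As a minimal sketch of the metric itself (not the DeepForest implementation), IoU for two axis-aligned boxes in (xmin, ymin, xmax, ymax) form can be computed and thresholded like this; the box coordinates here are made up for illustration:

def iou(box_a, box_b):
    # Intersection rectangle of two (xmin, ymin, xmax, ymax) boxes
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

predicted = (10, 10, 50, 50)
ground_truth = (20, 20, 60, 60)
print(iou(predicted, ground_truth))  # 900 / 2300, roughly 0.39

At a 0.4 threshold this pair would just miss being a true positive, so the ground truth crown would be counted as a false negative.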

For this blog, we will be using the NeonTreeEvaluation dataset, which is linked at the bottom of the post.

Here we calculated the evaluation score by resampling each image at various resolutions and then judging model predictions at each of those resolutions. We did this because the model is overall too sensitive to input resolution. There are two strategies for improvement: preprocessing and retraining the model weights. Our idea was to retrain the model using different zoom augmentations to try to make it more robust to input patch size.

The first step was to get the evaluation data and then resample each image at various resolutions:

from osgeo import gdal

def resample_image_in_place(self, image_path, new_res, resample_image):
    # Warp the source image to the target resolution and write it out
    args = gdal.WarpOptions(xRes=new_res, yRes=new_res)
    gdal.Warp(resample_image, image_path, options=args)

where image_path is the path to the original image, resample_image is the path where the image warped to the given resolution will be written, and new_res is the resolution to which the original image will be warped.
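For example, assuming the Scores class from the accompanying notebook and hypothetical file paths, a call to warp an image down to 40 cm (0.40 m) resolution might look like this:

scores = Scores()
# Hypothetical paths: warp the original tile to a 0.40 m resolution copy
scores.resample_image_in_place('/content/data/original_image.tif',
                               0.40,
                               '/content/data/temp_image_0.4_resolution.tif')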

Next, we calculated the evaluation score for the resampled image:

import os

def evaluation_image(self, image_path, resolutionxyz, counter):
    # Resample the image to the target resolution
    resampled_path = f'/content/data/temp_image_{resolutionxyz}_resolution.tif'
    self.resample_image_in_place(image_path, resolutionxyz, resampled_path)

    # Predict tree crowns on the resampled image and save them to CSV
    boxes = self.model.predict_image(path=resampled_path, return_plot=False)
    boxes['image_path'] = resampled_path
    boxes.to_csv(f'/content/data/file_{resolutionxyz}_resolution.csv')

    # Evaluate the predictions at this resolution
    csv_file = f'/content/data/file_{resolutionxyz}_resolution.csv'
    root_dir = os.path.dirname(csv_file)
    results = self.model.evaluate(csv_file, root_dir, iou_threshold=0.4,
                                  savedir=None)
    print(str(results["box_recall"]) +
          f" is the evaluation of predicted data at {resolutionxyz}")
    self.arrX[counter].append(resolutionxyz)
    self.arrY[counter].append(results["box_recall"])

First, we resampled the image, then predicted trees on that warped image and saved the predictions to a CSV file. After that, we evaluated the results with the DeepForest function model.evaluate. Recall is the proportion of ground truth boxes that have a true positive match with a prediction, based on the intersection-over-union threshold; this threshold defaults to 0.4 and can be changed with model.evaluate(iou_threshold=<>).
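As a rough sketch of how recall falls out of those IoU matches (DeepForest matches predictions to crowns one-to-one internally, so this simplified version reusing the iou() helper from earlier is only illustrative):

def box_recall(ground_truth, predictions, iou_threshold=0.4):
    # Proportion of ground truth boxes with at least one prediction
    # overlapping above the IoU threshold (simplified, not one-to-one)
    matched = sum(1 for gt in ground_truth
                  if any(iou(gt, pred) > iou_threshold for pred in predictions))
    return matched / len(ground_truth) if ground_truth else 0.0

We then plot resolution on the x-axis and the evaluation score on the y-axis; plotting can be done with the following snippet: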

import matplotlib.pyplot as plt

def plotting(self):
    plt.figure()

    # One scatter series per image; counters start at 1
    for i in range(self.num_images):
        plt.scatter(self.arrX[i+1], self.arrY[i+1], marker='o')

    plt.xlabel("Resolution")
    plt.ylabel("Evaluation")
    plt.show()

and the snippet that executes the code above is:


import glob

scores = Scores()

counter = 1
for file in glob.iglob("/content/NeonTreeEvaluation/evaluation/RGB/*"):
    if counter <= scores.num_images:
        print(counter)
        print(f"Filename: {file}")
        try:
            scores.results(file, counter,
                           resolution_values=[0.20, 0.40, 0.60, 0.80, 1, 1.20,
                                              1.40, 1.60, 1.80, 2, 2.5])
        except Exception:
            print(f"File has some invalid data, Filename : {file}")
        counter += 1

scores.plotting()
plt.close()

where resolution_values contains the resolutions to which each image will be warped; a value of 0.20 means 20 cm. Images get so blurry at resolutions above 100 cm that evaluating them makes little practical sense, but we still evaluated them to check what the model predicts at those resolutions.

We used 1000 files for this evaluation and plotted the results.

Dataset: NeonTreeEvaluation

Colab notebook: code


